OpenVINO could not compile blob from frozen TensorFlow .pb or IR .xml/.bin model - Darknet

OpenVINO 2021.1 is up and running.
I downloaded the yolov3_tiny.weights and yolov3_tiny.cfg files from https://pjreddie.com/darknet/yolo/.
As suggested in this notebook (https://colab.research.google.com/github/luxonis/depthai-ml-training/blob/master/colab-notebooks/Easy_TinyYolov3_Object_Detector_Training_on_Custom_Data.ipynb#scrollTo=2tojp0Wd-Pdw), I downloaded https://github.com/mystic123/tensorflow-yolo-v3 and used its convert_weights_pb.py to convert the weights and cfg files into a frozen YOLOv3-tiny .pb model:
python3 convert_weights_pb.py --class_names /home/user/depthai-python/my_job/coco.names --data_format NHWC --weights_file /home/user/depthai-python/my_job/yolov3-tiny.weights --tiny
Then I used OpenVINO's mo.py to convert the YOLOv3-tiny .pb model into the IR files .xml and .bin:
python3 mo.py --input_model /home/user/depthai-python/my_job/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config /home/user/depthai-python/my_job/yolo_v3_tiny.json --batch 1 --data_type FP16 --reverse_input_channel --output_dir /home/user/depthai-python/my_job
Finally I used this Python script to convert the .xml and .bin files into a .blob file:
import requests

blob_dir = "./my_job/"
binfile = "./my_job/frozen_darknet_yolov3_model.bin"
xmlfile = "./my_job/frozen_darknet_yolov3_model.xml"

url = "http://69.164.214.171:8083/compile"  # change if running against other URL
payload = {
    'compiler_params': '-ip U8 -VPU_NUMBER_OF_SHAVES 8 -VPU_NUMBER_OF_CMX_SLICES 8',
    'compile_type': 'myriad'
}
files = {
    'definition': open(xmlfile, 'rb'),
    'weights': open(binfile, 'rb')
}
params = {
    'version': '2021.1',  # OpenVINO version, can be "2021.1", "2020.4", "2020.3", "2020.2", "2020.1", "2019.R3"
}
response = requests.post(url, data=payload, files=files, params=params)
print(response.headers)
print(response.content)
blobnameraw = response.headers.get('Content-Disposition')
print('blobnameraw', blobnameraw)
blobname = blobnameraw[blobnameraw.find('='):][1:]
with open(blob_dir + blobname, 'wb') as f:
    f.write(response.content)
I got the following error:
{'Content-Type': 'application/json', 'Content-Length': '564', 'Server': 'Werkzeug/1.0.0 Python/3.6.9', 'Date': 'Fri, 09 Apr 2021 00:25:33 GMT'}
b'{"exit_code":1,"message":"Command failed with exit code 1, command: /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/myriad_compile -m /tmp/blobconverter/b9ea1f9cdb2c44bcb9bb2676ff414bf3/frozen_darknet_yolov3_model.xml -o /tmp/blobconverter/b9ea1f9cdb2c44bcb9bb2676ff414bf3/frozen_darknet_yolov3_model.blob -ip U8 -VPU_NUMBER_OF_SHAVES 8 -VPU_NUMBER_OF_CMX_SLICES 8","stderr":"stoi\n","stdout":"Inference Engine: \n\tAPI version ............ 2.1\n\tBuild .................. 2021.1.0-1237-bece22ac675-releases/2021/1\n\tDescription ....... API\n"}\n'
blobnameraw None
Traceback (most recent call last):
File "converter.py", line 29, in
blobname = blobnameraw[blobnameraw.find('='):][1:]
AttributeError: 'NoneType' object has no attribute 'find'
Alternatively, I have tried the online blob converter tool at http://69.164.214.171:8083/; it gives the same error both for .xml/.bin to .blob and for .pb to .blob conversion.
Does anyone have an idea? I have tried all versions of OpenVINO.

We recommend you use myriad_compile or the compile_tool to convert your model to a blob. The Compile Tool is a C++ application that compiles a network for inference on a specific device and exports it to a binary file. With the Compile Tool you can compile a network using the supported Inference Engine plugins on a machine that doesn't have the physical device connected, and then transfer the generated file to any machine with the target inference device available.
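For reference, a minimal sketch of driving myriad_compile from a Python script; the tool name and all flags are copied from the failing command shown in the error output above, so adjust the path to your own OpenVINO install:

```python
import shutil
import subprocess

def build_compile_cmd(xml_path, blob_path, shaves=8, cmx_slices=8):
    # Mirror the myriad_compile flags from the failing command in the
    # error output above; adjust the tool name/path to your install.
    return [
        "myriad_compile",
        "-m", xml_path,
        "-o", blob_path,
        "-ip", "U8",
        "-VPU_NUMBER_OF_SHAVES", str(shaves),
        "-VPU_NUMBER_OF_CMX_SLICES", str(cmx_slices),
    ]

cmd = build_compile_cmd("frozen_darknet_yolov3_model.xml",
                        "frozen_darknet_yolov3_model.blob")
if shutil.which(cmd[0]):
    subprocess.run(cmd, check=True)  # run only if the tool is on PATH
else:
    print(" ".join(cmd))  # otherwise just show what would be executed
```

Running the tool locally also surfaces the real stderr ("stoi" here), which the online converter truncates.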

Related

How to remove Toaster from a Yocto project

I am using yocto to build an image for the Up-Board following this guide: https://github.com/AaeonCM/meta-up-board
I tried to use Toaster for this project but later found out that the Up-Board isn't supported by toaster.
After I ran source toaster start I was no longer able to build from the command line with bitbake.
I get an error when I try to add an existing project by build directory in toaster, I think this is because the board is not supported.
The error I get when I try to use the 'Import command line project' feature in Toaster web browser is:
Environment:
Request Method: POST
Request URL: http://127.0.0.1:8000/toastergui/newproject/
Django Version: 2.2.27
Python Version: 3.6.9
Installed Applications:
('django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.messages',
'django.contrib.sessions',
'django.contrib.admin',
'django.contrib.staticfiles',
'django.contrib.humanize',
'bldcollector',
'toastermain',
'bldcontrol',
'toastergui',
'orm')
Installed Middleware:
['django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware']
Traceback:
File "/home/dave/.local/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
34. response = get_response(request)
File "/home/dave/.local/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
115. response = self.process_exception_by_middleware(e, request)
File "/home/dave/.local/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
113. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/media/drive2/yocto2/poky/bitbake/lib/toaster/toastergui/views.py" in newproject
1407. management.call_command('buildimport', '--command=import', '--name=%s' % request.POST['projectname'], '--path=%s' % request.POST['importdir'], interactive=False)
File "/home/dave/.local/lib/python3.6/site-packages/django/core/management/__init__.py" in call_command
140. ', '.join(sorted(valid_options)),
Exception Type: TypeError at /toastergui/newproject/
Exception Value: Unknown option(s) for buildimport command: interactive. Valid options are: callback, command, delete_project, force_color, help, name, no_color, path, pythonpath, release, settings, skip_checks, stderr, stdout, traceback, verbosity, version.
The error I get when I try to build my project again from the command line is:
/poky/build$ MACHINE=up-board bitbake upboard-image-sato
Loading cache: 100% |#####################################################################################################################################################################| Time: 0:00:00
Loaded 3393 entries from dependency cache.
NOTE: Resolving any missing task queue dependencies
Build Configuration:
BB_VERSION = "1.46.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "universal"
TARGET_SYS = "x86_64-poky-linux"
MACHINE = "up-board"
DISTRO = "poky"
DISTRO_VERSION = "3.1.14"
TUNE_FEATURES = "m64 corei7"
TARGET_FPU = ""
meta
meta-poky
meta-yocto-bsp = "dunfell:c8987e7bca6ab22a166ca13c5d2fe8e02fbb6e23"
meta-intel = "dunfell:8eec569734cb1ff9c0905f4a88f9b4bfc89ed9fc"
meta-up-board = "dunfell:23b0e460c068db2c69d844681a23b8febe9b95a0"
meta-oe
meta-python
meta-networking
meta-filesystems = "dunfell:ec978232732edbdd875ac367b5a9c04b881f2e19"
meta-virtualization = "dunfell:c5f61e547b90aa8058cf816f00902afed9c96f72"
Initialising tasks: 100% |################################################################################################################################################################| Time: 0:00:04
Sstate summary: Wanted 2 Found 0 Missed 2 Current 3705 (0% match, 99% complete)
NOTE: Executing Tasks
NOTE: Tasks Summary: Attempted 9002 tasks of which 9002 didn't need to be rerun and all succeeded.
ERROR: Execution of event handler 'toaster_buildhistory_dump' failed
Traceback (most recent call last):
File "/media/drive2/yocto2/poky/meta/classes/toaster.bbclass", line 273, in toaster_buildhistory_dump(e=<bb.event.BuildCompleted object at 0x7fe8e830f6d8>):
files[target]['files'] = []
> with open("%s/installed-package-sizes.txt" % installed_img_path, "r") as fin:
for line in fin:
FileNotFoundError: [Errno 2] No such file or directory: '/media/drive2/yocto2/poky/build/buildhistory/images/up_board/glibc/upboard-image-sato/installed-package-sizes.txt'
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
Is there any way to undo whatever changes Toaster made to the project? Uninstall Toaster? Or should I start the project over again?

Are there any patches for tegra-minimal-initramfs.bb to prevent this error?

I was trying to build core-image-minimal for the Jetson TX2, following the instructions from this link:
https://developer.ridgerun.com/wiki/index.php?title=Yocto_Support_for_NVIDIA_Jetson_Platforms-Old
My build configuration is:
Build Configuration:
BB_VERSION = "1.46.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "universal"
TARGET_SYS = "aarch64-poky-linux"
MACHINE = "jetson-tx2"
DISTRO = "poky"
DISTRO_VERSION = "3.1.5"
TUNE_FEATURES = "aarch64 armv8a crc"
TARGET_FPU = ""
meta-tegra = "dunfell-l4t-r32.4.3:3b4df1ac05e9f96e0363630c036f5445800af435"
meta
meta-poky
meta-yocto-bsp = "dunfell:6e89d668246fb37b2217aae7ae57390e793696d8"
But I got this error related to the tegra-minimal-initramfs recipe:
WARNING: tegra-minimal-initramfs-1.0-r0 do_image_complete: KeyError in .
ERROR: tegra-minimal-initramfs-1.0-r0 do_image_complete: Error executing a python function in exec_python_func() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:sstate_report_unihash(d)
0003:
File: '/home/pc_1175/yocto-tegra/poky-dunfell/meta/classes/sstate.bbclass', lineno: 844, function: sstate_report_unihash
0840: report_unihash = getattr(bb.parse.siggen, 'report_unihash', None)
0841:
0842: if report_unihash:
0843: ss = sstate_state_fromvars(d)
*** 0844: report_unihash(os.getcwd(), ss['task'], d)
0845:}
0846:
0847:#
0848:# Shell function to decompress and prepare a package for installation
File: '/home/pc_1175/yocto-tegra/poky-dunfell/bitbake/lib/bb/siggen.py', lineno: 555, function: report_unihash
0551:
0552: if "." in self.method:
0553: (module, method) = self.method.rsplit('.', 1)
0554: locs['method'] = getattr(importlib.import_module(module), method)
*** 0555: outhash = bb.utils.better_eval('method(path, sigfile, task, d)', locs)
0556: else:
0557: outhash = bb.utils.better_eval(self.method + '(path, sigfile, task, d)', locs)
0558:
0559: try:
File: '/home/pc_1175/yocto-tegra/poky-dunfell/bitbake/lib/bb/utils.py', lineno: 420, function: better_eval
0416: if extraglobals:
0417: ctx = copy.copy(ctx)
0418: for g in extraglobals:
0419: ctx[g] = extraglobals[g]
*** 0420: return eval(source, ctx, locals)
0421:
0422:@contextmanager
0423:def fileslocked(files):
0424: """Context manager for locking and unlocking file locks."""
File: '<string>', lineno: 1, function: <module>
File "<string>", line 1, in <module>
File: '/home/pc_1175/yocto-tegra/poky-dunfell/meta/lib/oe/sstatesig.py', lineno: 593, function: OEOuthashBasic
0589:
0590: update_hash("\n")
0591:
0592: # Process this directory and all its child files
*** 0593: process(root)
0594: for f in files:
0595: if f == 'fixmepath':
0596: continue
0597: process(os.path.join(root, f))
File: '/home/pc_1175/yocto-tegra/poky-dunfell/meta/lib/oe/sstatesig.py', lineno: 553, function: process
0549: add_perm(stat.S_IXOTH, 'x')
0550:
0551: if include_owners:
0552: try:
*** 0553: update_hash(" %10s" % pwd.getpwuid(s.st_uid).pw_name)
0554: update_hash(" %10s" % grp.getgrgid(s.st_gid).gr_name)
0555: except KeyError:
0556: bb.warn("KeyError in %s" % path)
0557: raise
Exception: KeyError: 'getpwuid(): uid not found: 1000'
ERROR: Logfile of failure stored in: /home/pc_1175/yocto-tegra/build/tmp/work/jetson_tx2-poky-linux/tegra-minimal-initramfs/1.0-r0/temp/log.do_image_complete.23961
ERROR: Task (/home/pc_1175/yocto-tegra/meta-tegra/recipes-core/images/tegra-minimal-initramfs.bb:do_image_complete) failed with exit code '1'
The problem is that I have built the same image before, with the same packages, and I didn't get this error.
When I added TEGRA_INITRAMFS_INITRD = "0" to my local.conf file I didn't get this error, but I'm wondering if it can affect my system.
TL;DR
Just clean the image recipe working directory with bitbake -c cleansstate tegra-minimal-initramfs, and also bitbake -c cleansstate core-image-minimal just in case. Then the build should work, but it may not be reproducible; that is, you may have to run these two cleansstate commands before building the images (tegra-minimal-initramfs and core-image-minimal) every time.
I got the same issue when I migrated my project (not connected with Tegra) to Yocto 3.2. The issue was with pseudo, the fakeroot tool Yocto uses to generate a rootfs with the right file ownership (you run bitbake as an ordinary user, not root, but get a rootfs with all files belonging to root). Here is the bug I've posted with my patch.
But you are using Yocto 3.1.5 as I see, so your issue is different. The core reason is that during some package build (the one that was excluded by TEGRA_INITRAMFS_INITRD = "0") pseudo remembered that some file should belong to user 1000, but while building tegra-minimal-initramfs (generating the initramfs) user 1000 was not found in the initramfs itself, because it contains only root and a few basic Linux users.
As for your question whether anything may break if you leave TEGRA_INITRAMFS_INITRD = "0": likely yes. Here is where this variable is applied; it is used during the U-Boot build and it looks like it turns off initramfs usage entirely. So with TEGRA_INITRAMFS_INITRD = "0" your final image would not include an initramfs file. If the device has some fallback mechanism to boot without an initramfs, you are probably OK; if not, try cleansstate.
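The root cause is easy to reproduce directly in Python: pwd.getpwuid raises KeyError for a uid that has no passwd entry, which is exactly what sstatesig.py hits inside the initramfs. A hedged sketch of the lookup-with-fallback pattern (the fallback to the numeric uid is my illustration, not what Yocto itself does):

```python
import pwd

def owner_name(uid):
    """Resolve a uid to a user name, falling back to the numeric uid
    when there is no passwd entry (the KeyError case from the log)."""
    try:
        return pwd.getpwuid(uid).pw_name
    except KeyError:
        return str(uid)

print(owner_name(0))          # "root" on virtually every Linux system
print(owner_name(999999999))  # no passwd entry -> falls back to "999999999"
```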

ERROR: blob.download_to_filename returns an empty file and raises an error

I'm trying to download a *.tflite model from Google Cloud Storage to my Raspberry Pi 3B+ using the following code:
export_metadata = response.metadata
export_directory = export_metadata.export_model_details.output_info.gcs_output_directory
model_dir_remote = export_directory + remote_model_filename # file path on Google Cloud Storage
model_dir = os.path.join("models", model_filename) # file path supposed to store locally
blob = bucket.blob(model_dir_remote)
blob.download_to_filename(model_dir)
However, this writes an empty file to my local target directory and raises an error:
# ERROR: google.resumable_media.common.InvalidResponse: ('Request failed with status code', 404,
# 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)
# ERROR: google.api_core.exceptions.NotFound: 404
# GET https://storage.googleapis.com/download/storage/v1/b/ao-example/o/
# gs%3A%2F%2Fao-example%2Fmodel-export%2Ficn%2Fedgetpu_tflite-Test_Model12-2020-11-16T07%3A54%3A27.187Z%2Fedgetpu_model.tflite?alt=media:
# ('Request failed with status code', 404, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)
I granted the corresponding permissions to the service account. What confuses me is that when I use the gsutil command, it works:
gsutil cp gs://ao-example/model-export/icn/edgetpu_model.tflite models/
Is anyone encountering the same problem? Is there any error in my code? Your help will be greatly appreciated!
I used the following code:
from google.cloud import storage
from google.cloud import automl
from google.cloud.storage import Blob
client = storage.Client(project="semesterproject-294707")
bucket_name = 'ao-example'
bucket = client.get_bucket(bucket_name)
model_dir_remote = "gs://ao-example/model-export/icn/edgetpu_tflite-Test_Model13-2020-11-18T15:03:42.620Z/edgetpu_model.tflite"
blob = Blob(model_dir_remote, bucket)
with open("models/edgetpu_model13.tflite", "wb") as file_obj:
    blob.download_to_file(file_obj)
This raises the same error and also returns an empty file... Still, I can use the gsutil cp command to download the file...
(Edited on 06/12/2020)
The info of the generated model:
export_model_details {
output_info {
gcs_output_directory: "gs://ao-example/model-export/icn/edgetpu_tflite-gc14-2020-12-06T14:43:18.772911Z/"
}
}
model_gcs_path: 'gs://ao-example/model-export/icn/edgetpu_tflite-gc14-2020-12-06T14:43:18.772911Z/edgetpu_model.tflite'
model_local_path: 'models/edgetpu_model.tflite'
It still encounters the error:
google.api_core.exceptions.NotFound: 404 GET https://storage.googleapis.com/download/storage/v1/b/ao-example/o/gs%3A%2F%2Fao-example%2Fmodel-export%2Ficn%2Fedgetpu_tflite-gc14-2020-12-06T14%3A43%3A18.772911Z%2Fedgetpu_model.tflite?alt=media: ('Request failed with status code', 404, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)
Still, when I use the gsutil cp command, it works:
gsutil cp model_gcs_path model_local_path
Edited on 12/12/2020
Soni Sol's method works! Thanks!!
I believe you should use something like this:
from google.cloud import storage
from google.cloud.storage import Blob

client = storage.Client(project="my-project")
bucket = client.get_bucket("my-bucket")
blob = Blob("file_on_gcs", bucket)
with open("/tmp/file_on_premise", "wb") as file_obj:
    blob.download_to_file(file_obj)
Blobs / Objects
When using the client libraries, the gs:// prefix is not needed; also, the name of the bucket was being passed twice. On This Question they had a similar issue and got it corrected.
Please try the following code:
from google.cloud import storage
from google.cloud import automl
from google.cloud.storage import Blob
client = storage.Client(project="semesterproject-294707")
bucket_name = 'ao-example'
bucket = client.get_bucket(bucket_name)
model_dir_remote = "model-export/icn/edgetpu_tflite-Test_Model13-2020-11-18T15:03:42.620Z/edgetpu_model.tflite"
blob = Blob(model_dir_remote, bucket)
with open("models/edgetpu_model13.tflite", "wb") as file_obj:
    blob.download_to_file(file_obj)
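To make the fix mechanical, the full gs:// URI that export_metadata gives you can be converted into the bare blob name the client library expects by stripping the scheme and bucket prefix. The helper name below is mine, for illustration:

```python
def gcs_uri_to_blob_name(uri, bucket_name):
    """Turn 'gs://<bucket>/<path>' into the '<path>' expected by Blob()."""
    prefix = "gs://%s/" % bucket_name
    if uri.startswith(prefix):
        return uri[len(prefix):]
    return uri  # already a bare blob name

name = gcs_uri_to_blob_name(
    "gs://ao-example/model-export/icn/edgetpu_model.tflite", "ao-example")
print(name)  # model-export/icn/edgetpu_model.tflite
```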

Problem connecting the facebookads library to extract data from Facebook with the Marketing API using Python

I want to get info about an ad campaign, and I started with this code to get the campaign name. I get this error:
Traceback (most recent call last):
File "C:/Users/win7/PycharmProjects/API_Facebook/dd.py", line 2, in <module>
from facebookads.adobjects.adaccount import AdAccount
File "C:\Users\win7\AppData\Local\Programs\Python\Python37-32\lib\site-packages\facebookads\adobjects\adaccount.py", line 1582
def get_insights(self, fields=None, params=None, async=False, batch=None, pending=False):
^
SyntaxError: invalid syntax
What might be the reason? And if you can, please give code examples of how I can get more info about a campaign.
If you're using Python 3.7, the problem is that async became a reserved keyword; rename the async argument to async_ in the library sources:
import os, re

path = r"path facebookads"  # path to the installed facebookads package
python_files = []
for dirpath, dirnames, filenames in os.walk(path):
    for filename in filenames:
        if filename.endswith(".py"):
            python_files.append(os.path.join(dirpath, filename))
for python_file in python_files:
    with open(python_file, "r") as f:
        text = f.read()
    # \b keeps identifiers that merely contain "async" (e.g. is_async) intact
    revised_text = re.sub(r"\basync\b", "async_", text)
    with open(python_file, "w") as f:
        f.write(revised_text)
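One caveat with a blanket substitution: a bare re.sub("async", "async_", text) also rewrites identifiers that merely contain the word. Anchoring on word boundaries avoids that; a quick check (sample strings are mine):

```python
import re

# Only the standalone keyword gets renamed:
sample = "def get_insights(self, async=False, batch=None): return async"
print(re.sub(r"\basync\b", "async_", sample))
# -> def get_insights(self, async_=False, batch=None): return async_

# A bare substitution corrupts identifiers that contain "async":
print(re.sub("async", "async_", "is_async=True"))       # is_async_=True (wrong)
print(re.sub(r"\basync\b", "async_", "is_async=True"))  # is_async=True (left alone)
```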
They have since updated and renamed the library; it is now facebook_business, and the async argument was renamed to is_async.
Try updating facebookads:
$ pip install --upgrade facebookads
I'm using facebookads==2.11.4.
More info: https://pypi.org/project/facebookads/

Logstash (2.3.2) gzip codec does not work

I'm using Logstash (2.3.2) to read .gz files with the gzip_lines codec.
An example log file (sample.log) is:
127.0.0.2 - - [11/Dec/2013:00:01:45 -0800] "GET /xampp/status.php HTTP/1.1" 200 3891 "http://cadenza/xampp/navi.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:25.0) Gecko/20100101 Firefox/25.0"
The command I used to append to a gz file is:
cat sample.log | gzip -c >> s.gz
The logstash.conf is
input {
  file {
    path => "./logstash-2.3.2/bin/s.gz"
    codec => gzip_lines { charset => "ISO-8859-1" }
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
    #match => { "message" => "message: %{GREEDYDATA}" }
  }
  #date {
  #  match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  #}
}
output {
  stdout { codec => rubydebug }
}
I have installed the gzip_lines plugin with bin/logstash-plugin install logstash-codec-gzip_lines
and started Logstash with ./logstash -f logstash.conf
When I feed s.gz with
cat sample.log | gzip -c >> s.gz
I expect the console to print the data, but nothing is printed out.
I have tried it on Mac and Ubuntu, with the same result.
Is anything wrong with my setup?
I checked the code for gzip_lines and it seemed obvious to me that this plugin is not working, at least in version 2.3.2. Maybe it is outdated, because it does not implement the methods specified here:
https://www.elastic.co/guide/en/logstash/2.3/_how_to_write_a_logstash_codec_plugin.html
The current internal workings are as follows:
The file input plugin reads the file line by line and sends each line to the codec.
The gzip_lines codec tries to create a new GzipReader object with GzipReader.new(io).
It then goes through the reader line by line to create events.
Because you specify a gzip file, the file input plugin reads the gzip file as a regular file and sends its lines to the codec. The codec then tries to create a GzipReader from such a string, and it fails.
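The same failure mode is easy to reproduce outside Logstash: a gzip reader only accepts a real gzip stream, never lines of text pulled out of the middle of one. A short illustration in Python (an analogue, not the Ruby codec itself):

```python
import gzip

line = b'127.0.0.2 - - [11/Dec/2013:00:01:45 -0800] "GET /xampp/status.php HTTP/1.1" 200 3891\n'
blob = gzip.compress(line)

# A real gzip stream decompresses fine...
assert gzip.decompress(blob) == line

# ...but raw text (what the file input hands the codec) is rejected,
# because it lacks the gzip magic header.
try:
    gzip.decompress(line)
except gzip.BadGzipFile as e:
    print("codec would fail here:", e)
```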
You can modify it to work like this:
Create a file that contains a list of gzip files:
-- list.txt
/path/to/gzip/s.gz
Give it to the file input plugin:
file {
  path => "/path/to/list/list.txt"
  codec => gzip_lines { charset => "ISO-8859-1" }
}
The changes are:
Open the vendor/bundle/jruby/1.9/gems/logstash-codec-gzip_lines-2.0.4/lib/logstash/codecs/gzip_lines.rb file and add a register method:
public
def register
  @converter = LogStash::Util::Charset.new(@charset)
  @converter.logger = @logger
end
And in the decode method change:
@decoder = Zlib::GzipReader.new(data)
to:
@decoder = Zlib::GzipReader.open(data)
The disadvantage of this approach is that it won't tail your gzip file, only the list file, so you will need to create a new gzip file and append its path to the list.
I had a variant of this problem where I needed to decode bytes in a file to an intermediate string, to prepare input for a process that only accepts strings.
The fact that encoding/decoding issues were silently ignored in Python 2 is actually really bad IMHO; you can end up with various corrupt-data problems, especially if you need to re-encode the string back into bytes.
Using ISO-8859-1 works for both gz and text files alike, while utf-8 only worked for text files. I haven't tried it on PNGs yet.
Here's an example of what worked for me:
import codecs
import os

# src is an OS-level file descriptor, e.g. src = os.open(path, os.O_RDONLY)
data = os.read(src, bytes_needed)
chunk += codecs.decode(data, 'ISO-8859-1')
# do the needful with the chunk....
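The reason ISO-8859-1 works for arbitrary binary data is that it maps each of the 256 byte values to a distinct code point, so the bytes -> str -> bytes round trip is lossless, whereas utf-8 rejects many byte sequences. A small check:

```python
import codecs

raw = bytes(range(256))                   # every possible byte value
text = codecs.decode(raw, "ISO-8859-1")   # always succeeds
assert codecs.encode(text, "ISO-8859-1") == raw  # lossless round trip

# utf-8, by contrast, rejects bytes like 0xFF:
try:
    codecs.decode(b"\xff", "utf-8")
except UnicodeDecodeError:
    print("utf-8 cannot decode arbitrary bytes")
```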