I have a PyQt5 (5.15.6) application running in Python 3 and want to reference my .qss file like this:
qss_file = QtCore.QFile("my_app_qss.qss")
However, I have multiple apps that use the same .qss file, so depending on where I run an app from I need an absolute path rather than a relative one. I would also like to compile any of those apps with PyInstaller and deploy them to another machine. How can I reference this .qss file?
Example folder structure:
main/
|- resources/my_app_qss.qss
|- apps/
|   |- project1/app1.py
|   |- project2/
|       |- subfolder/app2.py
The issue is that I did not understand that
qss_file = QtCore.QFile("my_app_qss.qss")
is not a path to a file. It references a resource that gets built by pyrcc5 from the .qrc source.
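For illustration, a minimal sketch of the resource route, assuming a resources.qrc that lists my_app_qss.qss and is compiled with pyrcc5 (the .qrc and module names here are assumptions, not part of the original setup):
# resources.qrc (assumed) would contain something like:
#   <RCC><qresource prefix="/"><file>my_app_qss.qss</file></qresource></RCC>
# and is compiled once with:  pyrcc5 resources.qrc -o resources_rc.py
from PyQt5 import QtCore
import resources_rc  # importing the generated module registers the resources with Qt

qss_file = QtCore.QFile(":/my_app_qss.qss")  # ":/" selects the resource system, not the filesystem
if qss_file.open(QtCore.QFile.ReadOnly | QtCore.QFile.Text):
    stylesheet = bytes(qss_file.readAll()).decode("utf-8")
    qss_file.close()
Because the stylesheet is compiled into resources_rc.py, every app can import the same module regardless of where it is run from, and PyInstaller bundles it like any other Python module.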
Hello, I have one more problem with deploying my app with Streamlit. It works locally, but when I upload it to GitHub it doesn't work. I have no idea what's wrong. It seems there is a problem with the path to the file:
"File "/app/streamlit/bobrza.py", line 14, in <module>
bobrza_locations = pd.read_csv(location)"
Here is the link to my GitHub repo. I will be very grateful for help. Thanks in advance.
https://github.com/Bordonous/streamlit
The problem is that you are hard-coding the paths of bobrza1.csv and route.csv as they are on your computer, so when the code runs in a different environment the paths are not valid.
The solution is to make the location independent of the running environment; for this we will use the following:
__file__ variable - the path to the current Python module (the .py file).
os.path.dirname() - a function to get the directory name from a path.
os.path.abspath() - a function to get a normalized, absolutized version of a path.
os.path.join() - a function to join one or more path components.
Now you need to change your location and location2 variables in the code to the following:
import os

# get the absolute path to the directory containing the .csv files
dir_name = os.path.abspath(os.path.dirname(__file__))
# join bobrza1.csv to the directory to get the file path
location = os.path.join(dir_name, 'bobrza1.csv')
# join route.csv to the directory to get the file path
location2 = os.path.join(dir_name, 'route.csv')
This results in paths for bobrza1.csv and route.csv that are independent of the running environment.
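Equivalently, a pathlib-based sketch under the same assumptions about file names (route is a hypothetical variable name for the second CSV):
from pathlib import Path
import pandas as pd

# directory that contains this .py file
dir_name = Path(__file__).resolve().parent
bobrza_locations = pd.read_csv(dir_name / 'bobrza1.csv')
route = pd.read_csv(dir_name / 'route.csv')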
I am working on a project which has multiple modules, each with its own doxyfile. My idea is to have a single master doxyfile which can include the other private doxyfiles to create one combined documentation set for the project. The directory structure looks like the following:
MyProject/
|- Private_Prj1/
|   |- Doc/
|       |- doxyfile_privateprj1
|- Private_Prj2/
|   |- Doc/
|       |- doxyfile_privateprj2
|- Doc/
    |- doxyfile_myproject  (AKA the master doxyfile)
How can I configure doxyfile_myproject to include doxyfile_privateprj1 and doxyfile_privateprj2 in such a way that when I run Doxygen on doxyfile_myproject, it then sequentially runs the other doxyfiles?
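For what it's worth, a Doxyfile cannot launch other Doxygen runs by itself (its @INCLUDE directive only merges configuration settings into the current file), so one hedged workaround is a small driver script that runs Doxygen on each doxyfile in turn; this is only a sketch using the paths from the layout above:
# Minimal sketch: run doxygen sequentially on each sub-project's doxyfile and then on
# the master one. Run from the MyProject root with the doxygen executable on PATH.
import os
import subprocess

doxyfiles = [
    "Private_Prj1/Doc/doxyfile_privateprj1",
    "Private_Prj2/Doc/doxyfile_privateprj2",
    "Doc/doxyfile_myproject",
]

for cfg in doxyfiles:
    # run doxygen from each doxyfile's own directory so relative paths inside it keep working
    subprocess.run(["doxygen", os.path.basename(cfg)], check=True, cwd=os.path.dirname(cfg))
If the goal is cross-linking between separately generated docs rather than one big run, Doxygen's GENERATE_TAGFILE and TAGFILES options are the usual mechanism.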
I tried to train Tesseract 4.1 using the OCRD project, but after training completed I copied the lang.traineddata and got the above error.
The Tesseract wiki page is very confusing to understand; it asks to use combine_lang_model after making the lstmf files. I actually already have the lstmf files; I created them from tif/box pairs.
Please help me with the next steps.
Related discussion: Failed to load any lstm-specific dictionaries for lang xxx
Suppose your training folder looks like this:
OCRD/makefile
OCRD/data/foo-ground-truth/
You could try the following steps:
1. Find the WORDLIST_FILE / NUMBERS_FILE / PUNC_FILE variables in the makefile and change them to:
WORDLIST_FILE := data/$(MODEL_NAME).wordlist
NUMBERS_FILE := data/$(MODEL_NAME).numbers
PUNC_FILE := data/$(MODEL_NAME).punc
2. Suppose your base traineddata is eng.traineddata.
2.1 Download the .wordlist/.numbers/.punc files from the langdata_lstm.
2.2 Place them in OCRD/data
2.3 If MODEL_NAME = foo, rename them as foo.wordlist, foo.numbers, foo.punc (see the sketch after these steps).
If you don't have a base traineddata, you can try this too. But if your base traineddata is afr, you should download the files from langdata_lstm/afr instead.
3. Run make training again.
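As noted in step 2.3, the renaming can be scripted; a minimal sketch, assuming MODEL_NAME = foo, an eng base, and that the three eng.* files have already been downloaded into OCRD/data:
# Copy eng.wordlist / eng.numbers / eng.punc to the model's name, as in steps 2.2-2.3.
# Run from the OCRD directory; the file names are assumptions based on the steps above.
import shutil

for ext in ("wordlist", "numbers", "punc"):
    shutil.copy(f"data/eng.{ext}", f"data/foo.{ext}")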
The cause of this error:
In OCRD, the default path for the above three files is $(OUTPUT_DIR) = data/$(MODEL_NAME), and all files in this path are generated automatically during training.
If the variable START_MODEL is not assigned, the makefile will not generate any of the related files under this path.
If the variable START_MODEL has been assigned, files such as foo.lstm-number-dawg, foo.lstm-punc-dawg and foo.lstm-word-dawg will be produced in data/$(MODEL_NAME), but they are not the right ones. So there may be a bug in OCRD.
I need to get the path of the file inside the private folder.
On my local machine I was able to get it by using the path "../../../../../". However, when I deployed to the Meteor server using meteor deploy, it no longer works. I also tried to log the current directory using process.cwd() and got the following, which is different from the structure I got on my local machine:
/meteor/containers/3906c248-566e-61b7-4637-6fb724a33c16/bundle/programs/server
The directory logged from my local machine gives:
/Users/machineName/Documents/projectName/.meteor/local/build/programs/server
Note: I am using this path to set up https://www.npmjs.com/package/apn
You can use assets/app/ as the relative path. While this may not make sense at first glance, Meteor re-arranges your /private directory so that it maps to assets/app relative to the /programs/server directory. This applies both in development and in production.
Basically, assume that private/ maps to assets/app/.
Call Assets.absoluteFilePath(assetPath) on one of the assets in the private folder, then chop off the name of the asset file from the string you get back. For example, assuming you have a file called test.txt in the private folder:
var aFile = 'test.txt'; // test.txt is in the private folder
var aFilePath = Assets.absoluteFilePath(aFile);
var aFolder = aFilePath.substr(0, aFilePath.length - aFile.length);
console.log(aFolder);
https://docs.meteor.com/api/assets.html#Assets-absoluteFilePath
I have followed instructions to create an .ipk file and a Packages.gz, and to host them on a web server as a repo. I have set opkg.conf on my other VM to point to this repo. The other VM is able to update and list the contents of the repositories successfully.
But when I try to install, I get the message below. Can you please explain why I am getting this and what needs to be changed?
Collected errors:
* wfopen: /etc/repo/d1/something.py: No such file or directory
* wfopen: /etc/repo/d1/something-else.py: No such file or directory
While creating the .ipk, I had created a folder named data with the file structure /etc/repo/d1/, where something.py is stored at the d1 location. I zipped that folder into data.tar.gz and then, together with control.tar.gz and debian-binary, created the .ipk.
I followed instructions from here:
http://bitsum.com/creating_ipk_packages.htm
http://www.jumpnowtek.com/yocto/Managing-a-private-opkg-repository.html
http://www.jumpnowtek.com/yocto/Using-your-build-workstation-as-a-remote-package-repository.html
It is very likely that the directory /etc/repo/d1/ does not exist on the target system. If you create the folder manually and try installing again, it probably will not fail. I'm not sure how to force opkg to create the empty directory by itself :/
Update:
You can solve this problem using a preinst script. Just create the missing directories in it, like this:
#!/bin/sh
mkdir -p /etc/repo/d1/
# always return 0 on success
exit 0