Is there any way to import a whole site (MySite.zexp) into Zope from the command line or programmatically, without using the web interface (ZMI)? I am using Plone 3.1.
I think (based on a little grepping in buildout-cache/eggs/Zope2*) that the import process, as triggered through the ZMI, ends up calling
security.declareProtected(import_export_objects, 'manage_importObject')
def manage_importObject(self, file, REQUEST=None, set_owner=1):
    """Import an object from a file"""
from Zope2-*.egg/OFS/ObjectManager.py.
Copy your file MySite.zexp into the import folder of your Plone instance (for Plone 3.1 it is probably located in ${PLONE_FOLDER}/parts/instance/import, or just look for the import folder using the find command). Then run the following command on the machine where the Zope server is running to import your zexp file into your ZODB:
$ wget 'http://admin:password@localhost:8080/manage_importObject?file=MySite.zexp'
where admin and password are your admin user login and password respectively.
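If you prefer to do it fully programmatically instead of over HTTP, a minimal sketch (assuming a standard buildout-generated bin/instance script, and that MySite.zexp is already in the import folder) is to run a small script with bin/instance run:

# import_site.py -- hypothetical helper script; run it as:
#   $ bin/instance run import_site.py
# (`bin/instance run` injects the Zope application root as the global `app`)
import transaction

# imports <instance>/import/MySite.zexp into the ZODB root,
# just like the ZMI Import/Export form would
app.manage_importObject(file='MySite.zexp')
transaction.commit()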
When I attempt to export my data from Insomnia (I want to export all my requests and import them on a different PC) as per the manual (https://docs.insomnia.rest/insomnia/import-export-data), I keep failing to achieve that. I suppose it should store the export on my hard drive (Linux, so in my Home directory), but when I run the export, the save dialog only shows an empty Home folder.
When I try to store the exported data again, I can already see the previously stored export. However, when I attempt to locate the file on my hard drive using the find command, it comes up empty. It appears as if Insomnia is storing the export on some kind of virtual drive that I can't actually access. I couldn't find anything about this issue online; the few articles related to Insomnia export implicitly suggest that the export gets automatically stored on the real hard drive. Unfortunately, that is not my case. Also, when I open the import dialog on the target PC, it too opens an empty Home folder, so the problem is not restricted to just one PC.
Please, how do I get the export to work with my normal hard drive? Thanks a lot in advance!
https://github.com/flathub/rest.insomnia.Insomnia/issues/4
"It seems like the flatpak exec command should include the additional argument --filesystem=home.
A temporary workaround is to edit the file at /var/lib/flatpak/[...]/rest.insomnia.Insomnia.desktop and add the argument to the Exec section."
[Desktop Entry]
Name=Insomnia
Exec=/usr/bin/flatpak run --branch=stable --arch=x86_64 --filesystem=home --command=/app/bin/insomnia --file-forwarding rest.insomnia.Insomnia --no-sandbox @@u %U @@
...
Note: I added the --filesystem=home argument to the Exec line.
Then restart the PC.
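Alternatively, instead of editing the .desktop file by hand, the same permission can usually be granted with flatpak's override mechanism (using the app ID from the issue above; drop --user and use sudo for a system-wide installation):

$ flatpak override --user --filesystem=home rest.insomnia.Insomnia

This has the advantage of surviving app updates, whereas a hand-edited .desktop file may be overwritten.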
I'm installing the Algolia extension for Firestore. Setup works just fine, and it updates indices on add, delete, and update. But now I want to backfill it with existing data.
The following steps are provided in the setup guide, but I have no clue how to run that script. I've tried pasting it directly into a node shell and into PowerShell, and adding it to a .js or .ps1 file and running that, but I don't know what kind of script this is.
How do I run this script? (I have a service account JSON next to it.)
It's bash...
It works when pasted directly into bash, provided each line keeps a space before the trailing backslash (the backslash only continues the line; the space is what keeps the variable assignments separate). It also works as a .sh file run from the command line.
#!/bin/bash
LOCATION=europe-west3 \
ALGOLIA_APP_ID=xxx \
ALGOLIA_API_KEY=xxx \
ALGOLIA_INDEX_NAME=organizations \
COLLECTION_PATH=organizations \
FIELDS=name,address,city \
GOOGLE_APPLICATION_CREDENTIALS=./xxx.json \
npx firestore-algolia-search
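If the line continuations keep tripping you up, an equivalent form (same variables, just exported first) avoids the backslashes entirely:

#!/bin/bash
# export the configuration so the npx child process inherits it
export LOCATION=europe-west3
export ALGOLIA_APP_ID=xxx
export ALGOLIA_API_KEY=xxx
export ALGOLIA_INDEX_NAME=organizations
export COLLECTION_PATH=organizations
export FIELDS=name,address,city
export GOOGLE_APPLICATION_CREDENTIALS=./xxx.json
npx firestore-algolia-search

The difference: exported variables remain set in your shell session, while the backslash-continued prefix form sets them only for that single npx invocation.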
After installing Spark I am trying to run PySpark from the installation folder:
opt/spark/bin/pyspark
But I get the following errors:
opt/spark/bin/pyspark: line 24: /opt/spark/bin/load-spark-env.sh: No such file or directory
opt/spark/bin/pyspark: line 68: /opt/spark/bin/spark-submit: No such file or directory
opt/spark/bin/pyspark: line 68: exec: /opt/spark/bin/spark-submit: cannot execute: No such file or directory
Why is this happening when I can see these items in their respective directories? I'm also trying to get PySpark to run standalone as a command, but I'd imagine that I must solve the former problem first.
I am running this on macOS.
This error indicates that SPARK_HOME is not set. Try this:
export SPARK_HOME=/opt/spark
pyspark
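To make this survive new shell sessions, and to be able to run pyspark as a plain command (your second question), you can also append the exports to your shell profile (~/.zshrc on recent macOS, ~/.bash_profile on older ones):

export SPARK_HOME=/opt/spark
export PATH="$SPARK_HOME/bin:$PATH"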
FYI, it is strongly recommended to install software on macOS using a package manager, like https://brew.sh.
This is the configuration:
export SPARK_HOME=<YOUR-PATH>/spark-2.4.4-bin-hadoop2.7
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.10.7-src.zip:$PYTHONPATH
And if you are thinking of using a notebook as well:
export PYSPARK_DRIVER_PYTHON="jupyter"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
export PYSPARK_PYTHON=python3
export PATH=$SPARK_HOME/bin:$PATH:~/.local/bin:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
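With those variables in place, a quick way to verify that Python actually picks up PySpark (a minimal sketch; local[1] just runs Spark in-process on one thread):

# check_spark.py -- a hypothetical sanity-check script
from pyspark.sql import SparkSession

# builds (or reuses) a local Spark session; no cluster needed
spark = SparkSession.builder.master("local[1]").appName("check").getOrCreate()
print(spark.version)  # e.g. 2.4.4 for the paths above
spark.stop()

If the version prints when you run python3 check_spark.py, the SPARK_HOME and PYTHONPATH settings are correct.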
I am not understanding how to properly run a simple test (a feature file plus a Python file)
with the pytest-bdd library.
From the official documentation, I can't figure out what command to issue to run a test.
I tried the pytest command, but it reported that no tests ran.
Do I need to use another library, behave, to run a feature file?
After trying for two days, I figured out that
running a pytest-bdd test has certain requirements, at least in my view:
- put both the feature file and the Python file in the same directory (maybe this can be changed with configuration files)
- the Python file's name needs to start with test_
- the Python file needs to contain a function whose name starts with test_
- the function starting with test_ needs to be bound to the scenario with the @scenario decorator (see the sketch below)
- to run the test, issue the pytest command in the same directory (maybe this is also configurable)
After running it, you will only see that the function starting with test_ passed, but all the steps actually ran. To check, put assert False in any @when- or @then-decorated function and it will throw errors.
The system contained: pytest-bdd==3.0.2 (copied from pip freeze output)
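A minimal sketch of that layout (every file name, scenario title, and step text below is a made-up example; written against pytest-bdd 3.x, where a @given function becomes a fixture named after itself):

# publish.feature
Feature: Publishing
  Scenario: Publish an article
    Given I have a draft article
    When I publish the article
    Then the article is published

# test_publish.py -- lives in the same directory as publish.feature
from pytest_bdd import scenario, given, when, then

@scenario('publish.feature', 'Publish an article')
def test_publish():
    pass  # the decorated steps below do the actual work

@given('I have a draft article')
def article():
    return {'status': 'draft'}

@when('I publish the article')
def publish(article):
    article['status'] = 'published'

@then('the article is published')
def check_published(article):
    assert article['status'] == 'published'

Running pytest in that directory should then collect and run test_publish.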
Feature files and Python files can be placed in different folders using the bdd_features_base_dir option provided by pytest-bdd; I think it is better to keep feature files in separate folders anyway.
Here you can see a working example (a simple hello world BDD test):
https://github.com/davidemoro/pytest-play-docker/tree/master/tests
https://github.com/davidemoro/pytest-play-docker/blob/master/tests/pytest.ini (see bdd_features_base_dir in [pytest] section)
https://github.com/davidemoro/pytest-play-docker/tree/master/tests/bdd
If you want to try out pytest-bdd without installing it, you can use Docker. Create a folder containing your pytest-bdd files (plus, if you want, a separate features folder targeted by bdd_features_base_dir) and run:
docker run --rm -it -v $(pwd):/src davidemoro/pytest-play:latest
I've found out that in the Python file you don't have to
bind the function starting with test_ to the scenario with the @scenario decorator.
You can just add scenarios("") to start the tests that use the steps defined in this specific Python file.
Remember to import scenarios: from pytest_bdd import scenarios
Example:
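A minimal sketch of the scenarios() style (reusing the assumed publish.feature from the sketch further up; the step definitions stay the same):

# test_publish_all.py -- same directory as the .feature file(s)
from pytest_bdd import scenarios

# generates one test per scenario found in this directory's feature files,
# so no explicitly @scenario-decorated test_ function is needed
scenarios("")

# ...@given/@when/@then step definitions exactly as in the earlier sketch...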
Command:
pytest -v path_to_test_file.py
Things to note here:
- Check the feature file format: filename.feature
- Always add __init__.py files to your modules, otherwise the test runner will not find the test files
- Glue the right step definitions to the test function
- Add the feature file to the features module
- If you are using Python 3, execute the tests with python3, so:
python3 -m pytest -v path_to_test_file.py
Documentation
https://pytest-bdd.readthedocs.io/en/stable/#
I have made some SSIS packages (.dtsx) on my local system and want to execute them using a PowerShell script. I tried this command:
dtexec /File c:\ssisExample.dts
This resulted in an error:
unable to load the package as XML because the package does not have a valid XML format
First Scenario: directly executing a file using a command in PowerShell
1) dtexec /FILE '\FILE_PATH_NAME\ssisPackage.dtsx'
Errors recorded:
1) The XML is not in the correct format / unable to load the packages
2) The specified file path is not proper
3) At least one of the DTS, SQL, ISServer or File options must be specified
Resolutions
1. Make sure you put the path in quotes, as in dtexec /FILE 'FILE_PATH/ssisPackage.dtsx'. Copy the path from the package properties in the SSIS project created in Visual Studio.
2. Allow the SSIS packages to be accessed remotely by a third party. For that, run Dcomcnfg.exe (requires Local Administrator):
a) Go to Component Services -> Computers -> DCOM Config -> Microsoft SQL Server Integration Services 13.0 (whatever version is installed).
b) Right-click -> Properties -> Security tab -> Launch and Activation Permission -> check Remote Launch and Remote Activation.
c) Do the same for Access Permission.
3. Make sure the system has the Microsoft.SqlServer.ManagedDTS package:
a) To check, open C:\Windows\assembly\GAC_MSIL (for example via the Run command).
b) Move to the folder named Microsoft.SqlServer.ManagedDTS and check the package version.
c) Once done, try giving access to these DTS packages:
d) Run Dcomcnfg.exe (requires Local Administrator).
e) Go to Component Services -> DCOM Config -> MsDtsServer100.
f) Right-click -> Properties; on the Security tab press Edit for Launch and Activation Permission, allow Remote Launch and Remote Activation, and close.
g) Press Edit for Access Permission -> allow Remote Access.
Your package should have the .dtsx file extension. Try this:
dtexec /File c:\ssisExample.dtsx
/De[crypt] password!
If you saved your packages on the local file system and used the wizard to create them, the MS tools usually encrypt the package file with a password. Make sure to decrypt the package with the /De argument.
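For example (a sketch; mypassword stands in for whatever password the wizard used to encrypt the package):

dtexec /File c:\ssisExample.dtsx /De mypassword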
You should try this; it works for me (executing dtexec from SQL Server via xp_cmdshell):
EXEC xp_cmdshell 'dtexec /f "c:\ssisExample.dtsx"'