Dividing a large program into subcommands with argparse

I want to use six subcommands (using subparsers from the argparse library) to divide my large program into smaller independent programs, and be able to run them individually. In other words, I envision running six commands from the command line one after the other, where the results of each command feed in as arguments of the next one. (Or if that is not possible with argparse, then at least some way of running each of the six parts independently).
I had no problem with one parser, but when trying to understand how to use subparsers for this task I found the documentation too confusing.
Currently my code is something like
import argparse
from my_functions import (func_a, func_b, func_c, func_d, func_e, func_f)

parser = argparse.ArgumentParser()  # Top-level parser
subparsers = parser.add_subparsers()

parser_a = subparsers.add_parser('parser_a', help='parser_a_help')
parser_a.set_defaults(func=func_a)
parser_a.add_argument('a_arg', type=int)

parser_b = subparsers.add_parser('parser_b', help='parser_b_help')
parser_b.set_defaults(func=func_b)
parser_b.add_argument('b_arg', type=int)

parser_c = subparsers.add_parser('parser_c', help='parser_c_help')
parser_c.set_defaults(func=func_c)
parser_c.add_argument('c_arg', type=int)

parser_d = subparsers.add_parser('parser_d', help='parser_d_help')
parser_d.set_defaults(func=func_d)
parser_d.add_argument('d_arg', type=int)

parser_e = subparsers.add_parser('parser_e', help='parser_e_help')
parser_e.set_defaults(func=func_e)
parser_e.add_argument('e_arg', type=int)

parser_f = subparsers.add_parser('parser_f', help='parser_f_help')
parser_f.set_defaults(func=func_f)
parser_f.add_argument('f_arg', type=int)

# Parse arguments
args = parser.parse_args()
args.func(args)

def main(a_arg, b_arg, c_arg, d_arg, e_arg, f_arg):
    # Do stuff with these args
    pass

if __name__ == "__main__":
    main(args.a_arg, args.b_arg, args.c_arg, args.d_arg, args.e_arg, args.f_arg)
So the behavior I want is that on the command line I can type
$ python my_function.py parser_a 3
$ python my_function.py parser_b 5
$ python my_function.py parser_c 8
$ python my_function.py parser_d 150
$ python my_function.py parser_e 42
$ python my_function.py parser_f 2
That way, if there's a problem in one subcommand, I can run that one independently for debugging.
Any help understanding the logic of what I should be doing would be greatly appreciated. I'm not even sure if the behavior I want is the behavior that I should want.
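For what it's worth, a minimal sketch of the dispatch pattern the code above is reaching for (the func_a/func_b bodies below are stand-ins, since my_functions isn't shown) might look like this:

import argparse

# Stand-in implementations; each subcommand function receives the parsed Namespace.
def func_a(args):
    print("func_a got", args.a_arg)

def func_b(args):
    print("func_b got", args.b_arg)

def build_parser():
    parser = argparse.ArgumentParser()
    # require a subcommand, so running with no arguments gives a clear error
    subparsers = parser.add_subparsers(dest="command", required=True)

    parser_a = subparsers.add_parser("parser_a", help="parser_a help")
    parser_a.add_argument("a_arg", type=int)
    parser_a.set_defaults(func=func_a)

    parser_b = subparsers.add_parser("parser_b", help="parser_b help")
    parser_b.add_argument("b_arg", type=int)
    parser_b.set_defaults(func=func_b)

    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    args.func(args)  # dispatch to the chosen subcommand only

Note that each invocation (e.g. python my_function.py parser_a 3) parses only that subcommand's arguments, so the Namespace never carries all six values at once; feeding one command's results into the next would have to happen outside argparse, e.g. via files or shell substitution.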

Related

Add bash script as an entrypoint to Python package with Poetry

Is it possible to add a bash script as an entrypoint (console script) to a Python package via Poetry? It looks like it only accepts Python files (see code here).
I want entry.sh to be an entry script
#!/usr/bin/env bash
set -e
echo "Running entrypoint"
via setup.py
entry_points={
    "console_scripts": [
        "entry=entry.sh",
    ],
},
On the other hand setuptools seems to be supporting shell scripts (see code here).
Is it possible to include a shell script in a package and add it to the entrypoints after installation when working with Poetry?
UPD: setuptools does not support that either (it generates the code below)
def importlib_load_entry_point(spec, group, name):
    dist_name, _, _ = spec.partition('==')
    matches = (
        entry_point
        for entry_point in distribution(dist_name).entry_points
        if entry_point.group == group and entry_point.name == name
    )
    return next(matches).load()

globals().setdefault('load_entry_point', importlib_load_entry_point)
Is this a design decision? It seems to me that packaging should provide such a feature, to deliver complex applications as a single bundle.
So I ended up using this workaround: keep the script in place, add it to the bundle via package_data, and call it from Python code that I registered as the entrypoint.
import subprocess

def _run(bash_script):
    return subprocess.call(bash_script, shell=True)

def entrypoint():
    return _run("./scripts/my_entrypoint.sh")

def another_entrypoint_if_needed():
    return _run("./scripts/some_other_script.sh")
and pyproject.toml
[tool.poetry.scripts]
entrypoint = 'bash_runner:entrypoint'
another = 'bash_runner:another_entrypoint_if_needed'
The same works for console_scripts in a setup.py file.
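One caveat with this workaround: a path like ./scripts/my_entrypoint.sh resolves against the caller's working directory, not the installed package. A sketch that resolves the script relative to the module instead (the scripts/ layout and file names are carried over from the answer, not verified):

import subprocess
from pathlib import Path

# Directory holding the bundled scripts, located next to this module
# (assumes they were shipped via package_data as described above).
SCRIPTS_DIR = Path(__file__).resolve().parent / "scripts"

def _run(script_name):
    # Invoke bash explicitly, so the script's executable bit doesn't matter.
    return subprocess.call(["bash", str(SCRIPTS_DIR / script_name)])

def entrypoint():
    return _run("my_entrypoint.sh")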

How to pytest an app that can use IPython embed as an arg parameter?

I have a Python application that has an option "-y" to end its procedure in an IPython terminal, with all created objects ready for interactive manipulation.
I'm trying to work out how to design a pytest that would let me, somehow, interact with this terminal: check that the objects are there in the Python session, exit, and then capture the results for asserts (I know how to use capsys, for example).
During my attempts (all failed so far) I got a suggestion to use pytest's -s option, which obviously doesn't solve my case.
So I have this example:
go_to_python.py
import argparse
import random

parser = argparse.ArgumentParser()
parser.add_argument(
    "-y",
    "--ipython",
    action="store_true",
    dest="ipython",
    help="start IPython interpreter")
args = parser.parse_args()

if __name__ == "__main__":
    randomlist = []
    for i in range(0, 5):
        n = random.randint(1, 30)
        randomlist.append(n)
    if args.ipython:
        import IPython
        IPython.embed(colors="neutral")
How could I create a test that could assert that randomlist is inside the ipython session?
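One approach (not from the original post) is to drive the process with pexpect and talk to the embedded IPython prompt directly. A sketch, assuming pexpect is installed and go_to_python.py is in the working directory (ANSI color codes in the prompt may need extra handling):

import pexpect

def test_randomlist_exists_in_ipython():
    # Spawn the app so it drops into the embedded IPython shell.
    child = pexpect.spawn("python go_to_python.py -y",
                          encoding="utf-8", timeout=10)
    child.expect(r"In \[\d+\]:")       # wait for the IPython prompt
    child.sendline("len(randomlist)")  # the object should exist in the session
    child.expect(r"Out\[\d+\]: 5")     # the list was built with 5 entries
    child.sendline("exit")
    child.expect(pexpect.EOF)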

Is there a way to perform a command line screenshot in Redhat without any GUI involved?

I want to be able to take screenshots in Redhat with no GUI involved. I don't have ImageMagick, so I can't use import (which would be perfect). I want to write a script which takes a screenshot every so often without user intervention.
I've tried gnome-panel-screenshot, but it brings up the snapshot GUI.
The script would look something like this (pseudo code):
sleep_time=<mySleepTime>
filename=<myFilename>
i=1
while true; do
    <snapshot command> "${filename}${i}"
    sleep "$sleep_time"
    i=$((i + 1))
done
Use the import command
$ import -window root -resize 400x300 -delay 200 screenshot.png
It only needs an X server running; it won't bring up any interface.
Or using python3 + OpenCV
$ pip3 install python3_xlib python-xlib
$ pip3 install pillow imutils pyautogui
$ pip3 install opencv-python
$ cat screen.py
import numpy as np
import pyautogui
import imutils
import cv2
image = pyautogui.screenshot()
image = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)
cv2.imwrite("in_memory_to_disk.png", image)
$ python3 screen.py
It doesn't bring up any interface; the script just runs and exits.
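If the periodic loop from the question is wanted in Python rather than shell, a sketch wrapping ImageMagick's import command (assuming it is available, as the first answer suggests, and an X server is running; interval and prefix are placeholders):

import subprocess
import time

SLEEP_SECONDS = 300        # placeholder interval
PREFIX = "screenshot"      # placeholder file name prefix

i = 1
while True:
    # Capture the whole root window to a numbered file, no GUI involved.
    subprocess.run(["import", "-window", "root", "%s%d.png" % (PREFIX, i)],
                   check=True)
    time.sleep(SLEEP_SECONDS)
    i += 1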

How to solve numpy import error when calling Anaconda env from Matlab

I want to execute a Python script from Matlab (on a Windows 7 machine). The libraries necessary are installed in an Anaconda virtual environment. When running the script from command line, it runs flawlessly.
When calling the script from Matlab as follows:
[status, commandOut] = system('C:/Users/user/AppData/Local/Continuum/anaconda3/envs/tf/python.exe test.py');
or with shell commands, I get an Import Error:
commandOut =
'Traceback (most recent call last):
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\core\__init__.py", line 16, in <module>
from . import multiarray
ImportError: DLL load failed: The specified path is invalid.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 2, in <module>
import numpy as np
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: The specified path is invalid.
I already changed the default Matlab Python version to the Anaconda env, but no change:
version: '3.5'
executable: 'C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\python.exe'
library: 'C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\python35.dll'
home: 'C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf'
isloaded: 1
Just running my test script without importing numpy works. Reloading numpy (py.importlib.import_module('numpy');) didn't work but threw the same error as before.
Does anyone have an idea how to fix this?
So after corresponding with Matlab support, I found out that Matlab depends on the PATH environment variable (paths which are deliberately not set when using a virtual environment), and therefore numpy fails to find the necessary paths when called from within Matlab (even if the call contains the path to the virtual environment).
The solution is either to call Matlab from within the virtual environment (via the command line) or to add the missing paths to the PATH environment variable manually.
Maybe this information can help someone else.
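To illustrate the second option in plain Python: prepend the environment's DLL directories to PATH before numpy is imported (the env path is the one from this question; Library\bin is where conda keeps the DLLs on Windows):

import os

# Adjust to your own environment; this is the path from the question.
ENV_ROOT = r"C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf"
os.environ["PATH"] = os.pathsep.join([
    ENV_ROOT,
    os.path.join(ENV_ROOT, "Library", "bin"),
    os.environ.get("PATH", ""),
])

# Only now import numpy, so the extension modules can find their DLLs.
import numpy as np
print(np.__version__)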
First Method
You can change the python interpreter with:
pyversion("/home/nibalysc/Programs/anaconda3/bin/python");
And check it with:
pyversion();
You could also do this in a startup.m file in your project folder, and every time you start MATLAB from this folder the Python interpreter will be changed automatically.
Now you can try to use:
py.importlib.import_module('numpy');
Read the documentation on how to use the integrated Python in MATLAB:
Call user defined custom module
Call modified python module
Alternative Method
An alternative method would be to create a matlab_shell.sh file with the following content. This is basically the code appended to .bashrc when Anaconda is installed and the installer asks whether it should modify the .bashrc file:
#!/bin/bash
__conda_setup="$(CONDA_REPORT_ERRORS=false '$HOME/path/to/anaconda3/bin/conda' shell.bash hook 2> /dev/null)"
if [ $? -eq 0 ]; then
    \eval "$__conda_setup"
else
    if [ -f "$HOME/path/to/anaconda3/etc/profile.d/conda.sh" ]; then
        CONDA_CHANGEPS1=false conda activate base
    else
        \export PATH="$HOME/path/to/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda init <<<

# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('$HOME/path/to/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "$HOME/path/to/anaconda3/etc/profile.d/conda.sh" ]; then
        . "$HOME/path/to/anaconda3/etc/profile.d/conda.sh"
    else
        export PATH="$HOME/path/to/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<

conda activate base
eval $2
Then you need to set the MATLAB_SHELL environment variable either before running MATLAB or in MATLAB itself. The best thing in my opinion would be to do it also in the startup.m file like that:
setenv("MATLAB_SHELL", "/path/to/matlab_shell.sh");
Afterwards you can use the system(...) function to run conda Python with all your modules installed, like this...
String notation:
system("python -c ""python code goes here""");
Char notation:
system('python -c "python code goes here"');
Hope this helps!
Firstly, if you execute your Python script like a regular system command ([status, commandOut] = system('...python.exe test.py')),
then pyversion (and pyenv, since R2019b) has no effect at all. It only matters if you use the py. integration, as in the code below (and, in most cases, that is a way better approach).
Currently (I use R2019b update 5) there are a number of pitfalls that might cause issues similar to yours. I'd recommend starting with the following:
Create a new clean conda environment:
conda create -n test_py36 python=3.6 numpy
Create the following dummy demo1.py:
def dummy_py_method(x):
    return x + 1
Create the following run_py_code.m:
function run_py_code()
% explicit module import sometimes shows more detailed error messages
py.importlib.import_module('numpy');
% reload so that any changes to the module are picked up:
pymodule = py.importlib.import_module('demo1');
py.importlib.reload(pymodule);
% passing data back and forth
x = rand([3 3]);
x_np = py.numpy.array(x);
y_np = pymodule.dummy_py_method(x_np);
y = double(y_np);
disp(y - x);
Create the following before_first_run.m:
setenv('PYTHONUNBUFFERED','1');
setenv('path',['C:\Users\username\Anaconda3\envs\test_py36\Library\bin;'...
    getenv('path')]);
pe = pyenv('Version','C:\users\username\Anaconda3\envs\test_py36\pythonw.exe',...
    'ExecutionMode','InProcess'...
    );
% add "demo1.py" to the path
py_file_path = 'W:\tests\Matlab\python_demos\call_pycode\pycode';
if count(py.sys.path,py_file_path) == 0
    insert(py.sys.path,int32(0),py_file_path);
end
Run the before_first_run.m first and run the run_py_code.m next.
Notes:
As already mentioned in this answer, one key point is to add the folder containing the necessary dll files to the %PATH%, before starting Python. This can be achieved with setenv from within Matlab. Usually, Library\bin is what should be added.
It might be a good idea to try a clean, officially supported CPython distribution (e.g. CPython 3.6.8). Only install numpy (python -m pip install numpy). In my experience, the setenv is not necessary in this case.
For me, OutOfProcess mode proved to be buggy. Thus, I'd recommend explicitly setting InProcess mode (for versions before R2019b, neither the OutOfProcess option nor pyenv is present).
Do not concatenate the two .m files above into one - the py.importlib statements seem to be pre-executed and thus conflict with pyenv.

How to set the default encoding in a buildout script, or during virtualenv creation?

I have a Plone project which is created by a buildout script and needs a default encoding of utf-8. This is usually done in the sitecustomize.py file of the Python installation. Since there is a virtualenv, I'd like to generate this file automatically, to contain something like:
import sys
sys.setdefaultencoding('utf-8')
After generation I have two empty sitecustomize.py files - one in parts/instance/, and one in parts/buildout - but neither of them seems to be used (I couldn't find them in sys.path).
I tried zopepy:
>>> from os.path import join, isfile
>>> from pprint import pprint
>>> import sys
>>> pprint([p for p in sys.path
... if isfile(join(p, 'sitecustomize.py'))
... ])
and found another one in my local lib/python2.7/site-packages/ directory which looks good; but it doesn't work:
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
This directory sits near the end of sys.path, because I needed to add it via an extra-paths entry (to get another historical package).
Any pointers? Thank you!
System information: CentOS 7, Python 2.7.5
Edit:
I deleted the two empty sitecustomize.py files; now I have a default encoding of utf-8 in the zopepy session, but still ascii in Plone. This surprises me, because I have in my buildout script:
[zopepy]
recipe=zc.recipe.egg
eggs = ${instance:eggs}
extra-paths = ${instance:extra-paths}
interpreter = zopepy
scripts = zopepy
To debug this, I created a little function which I added to my code, and which displays a little information about relevant modules in the sys.path:
import sys
from os.path import join, isdir, isfile

def sitecustomize_info():
    plen = len(sys.path)
    print '-' * 79
    print 'sys.path has %(plen)d entries' % locals()
    for tup in zip(range(1, plen+1), sys.path):
        nr, dname = tup
        if isdir(dname):
            for fname in ('site.py', 'sitecustomize.py'):
                if isfile(join(dname, fname)):
                    print '%(nr)4d. %(dname)s/%(fname)s' % locals()
            spname = join(dname, 'site-packages', 'sitecustomize.py')
            if isfile(spname):
                print '%(nr)4d. %(spname)s' % locals()
        else:
            print '? %(dname)s is not a directory' % locals()
    print '-' * 79
Output:
sys.path has 303 entries
8. /usr/lib64/python2.7/site-packages/sitecustomize.py
295. /opt/zope/instances/wnzkb/lib/python2.7/site-packages/sitecustomize.py
? /usr/lib64/python27.zip is not a directory
298. /usr/lib64/python2.7/site.py
? /usr/lib64/python2.7/lib-tk is not a directory
? /usr/lib64/python2.7/lib-old is not a directory
303. /usr/lib/python2.7/site-packages/sitecustomize.py
All sitecustomize.py files look the same (switching to utf-8), and I didn't tweak site.py (for now; if everything else fails, I might need to.)
If you really want/need to use the sitecustomize.py trick, you could include this part in your buildout:
[fixencode]
recipe = plone.recipe.command
stop-on-error = yes
update-command = ${fixencode:command}
command =
    SITE_PACKAGES=$(${buildout:executable} -c \
        'from distutils.sysconfig import get_python_lib;print(get_python_lib())')
    cat > $SITE_PACKAGES/../sitecustomize.py << EOF
    #!${buildout:executable} -S
    import sys
    sys.setdefaultencoding('utf-8')
    EOF
It will be written one level above the site-packages folder of your virtualenv.
It looks like sitecustomize.py is not found anymore unless placed in the global lib directories (Discussion "deleting setdefaultencoding in site.py is evil" (2009), Tracker ticket "sitecustomize.py not found") - and this was made on purpose (!).
This is to prevent users from overriding the default encoding which might have been adjusted by some library. Libraries shouldn't do that, however.
Thus, whoever needs to set the default encoding is pushed to do it globally, which looks like a very silly idea to me. I'd consider it much more reasonable to set this in my virtual environment.
Unfortunately the sitecustomize.py modules in a virtualenv seem to be silently ignored; but it is possible to edit the local site.py. Here is a little sed script:
# vv-------1------vv vv---2---vv vv--------3------vv vv-----------4------------vv
sed --in-place -e 's,^\( encoding =\) \("ascii"\) \(# Default value\) \(set by _PyUnicode_Init()\),\1 "utf-8" \3 \2 \4,' lib/python2.7/site.py
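A quick check (run with the virtualenv's interpreter) that the edit took effect:

import sys
# should print 'utf-8' once the patched site.py is in place
print(sys.getdefaultencoding())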