how to pytest an app that can use ipython embed as arg parameter? - pytest

I have a Python application with an option "-y" that ends its procedure in an IPython terminal, with all created objects ready for interactive manipulation.
I'm trying to figure out how to design a pytest test that would somehow let me interact with this terminal, check whether the objects are present in the Python session, exit, and then capture the results for assertions (I know how to use capsys, for example).
During my attempts (all failed so far) I got a suggestion to use pytest's -s option, which obviously doesn't apply to my case.
So I have this example:
go_to_python.py
import argparse
import random

parser = argparse.ArgumentParser()
parser.add_argument(
    "-y",
    "--ipython",
    action="store_true",
    dest="ipython",
    help="start IPython interpreter")
args = parser.parse_args()

if __name__ == "__main__":
    randomlist = []
    for i in range(0, 5):
        n = random.randint(1, 30)
        randomlist.append(n)
    if args.ipython:
        import IPython
        IPython.embed(colors="neutral")
How could I create a test that asserts that randomlist is available inside the IPython session?
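A rough sketch of one possible approach (not from the original post), assuming the pexpect package is available and the test runs on a POSIX system: spawn the script in a child process, talk to the real IPython prompt, query randomlist, and assert on the echoed output.

import sys
import pexpect

def test_randomlist_exists_in_ipython_session():
    # start the script with -y so it drops into IPython.embed()
    child = pexpect.spawn(sys.executable, ["go_to_python.py", "-y"],
                          encoding="utf-8", timeout=30)
    child.expect(r"In \[1\]:")        # wait for the first IPython prompt
    child.sendline("len(randomlist)")
    child.expect(r"In \[2\]:")        # wait for the next prompt
    assert "5" in child.before        # output of the previous cell contains 5
    child.sendline("exit")
    child.expect(pexpect.EOF)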

Related

How to pass extra argument to python unittest?

I want to pass the library location while running the unittest command, since I have to import the library in order to use it. Suppose the library name is some_lib. The same library will be used on Linux as well as Windows.
Using Python version 3.7.11.
Command used: python3 -m unittest test_file.py lib_location
Details of the test file:
import sys
sys.path.append(sys.argv[1])  # hard-coding the path works fine
import some_lib
import unittest

class TestCasesForSerializePy(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.archive_handler = some_lib.open('./test.archive')

    def test_existing_archive(self):
        self.assertTrue(self.archive_handler.isOpen())

if __name__ == '__main__':
    # unittest.main(argv=[sys.argv[0]])
    # sys.argv.pop()
    unittest.main()
Error: ModuleNotFoundError: No module named 'C:/REC_158/build/lib'
I tried the different approaches available on Google.
Approach 1:
sys.argv.pop()
unittest.main()
Approach 2:
del sys.argv[1:]
unittest.main()
Approach 3:
unittest.main(argv=[sys.argv[0]])
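One pattern that sidesteps the argv clash entirely (a sketch, not from the original post; SOME_LIB_PATH is a hypothetical environment variable name) is to pass the library location through the environment, so unittest never tries to interpret it as a test name. Assumed usage: SOME_LIB_PATH=/path/to/lib python3 -m unittest test_file.py

import os
import sys
import unittest

# hypothetical env var carrying the library location instead of a CLI argument
sys.path.append(os.environ.get("SOME_LIB_PATH", "."))
import some_lib  # library name taken from the question

class TestCasesForSerializePy(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.archive_handler = some_lib.open('./test.archive')

    def test_existing_archive(self):
        self.assertTrue(self.archive_handler.isOpen())

if __name__ == '__main__':
    unittest.main()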

How do you embed an ipython console with exec_lines?

I'm trying to embed an IPython console into my command-line application.
I have the following:
import IPython
from traitlets.config import Config

c = Config()
c.InteractiveShellApp.exec_lines = [
    'import matplotlib.pyplot as plt',
    '%matplotlib',
]
return IPython.start_ipython(config=c, user_ns=globals())
However, it seems to completely ignore the "exec_lines" part since plt is not available.
See: Can you specify a command to run after you embed into IPython?
IPython.start_ipython(config=c, user_ns=locals())
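For completeness, a self-contained sketch of that pattern (assuming IPython, traitlets and matplotlib are installed); whether exec_lines takes effect can depend on the IPython version and on how the shell is started:

import IPython
from traitlets.config import Config

def interactive_console():
    data = [1, 2, 3]  # an example local name to expose to the session
    c = Config()
    c.InteractiveShellApp.exec_lines = [
        'import matplotlib.pyplot as plt',
        '%matplotlib',
    ]
    # user_ns=locals() exposes the caller's local names, as suggested above
    return IPython.start_ipython(config=c, user_ns=locals())

if __name__ == "__main__":
    interactive_console()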

Is there a way to start mitmproxy v.7.0.2 programmatically in the background?

Is there a way to start mitmproxy v.7.0.2 programmatically in the background?
ProxyConfig and ProxyServer have been removed since version 7.0.0, and the code below isn't working.
from mitmproxy.options import Options
from mitmproxy.proxy.config import ProxyConfig
from mitmproxy.proxy.server import ProxyServer
from mitmproxy.tools.dump import DumpMaster
import threading
import asyncio
import time

class Addon(object):
    def __init__(self):
        self.num = 1

    def request(self, flow):
        flow.request.headers["count"] = str(self.num)

    def response(self, flow):
        self.num = self.num + 1
        flow.response.headers["count"] = str(self.num)
        print(self.num)

# see source mitmproxy/master.py for details
def loop_in_thread(loop, m):
    asyncio.set_event_loop(loop)  # This is the key.
    m.run_loop(loop.run_forever)

if __name__ == "__main__":
    options = Options(listen_host='0.0.0.0', listen_port=8080, http2=True)
    m = DumpMaster(options, with_termlog=False, with_dumper=False)
    config = ProxyConfig(options)
    m.server = ProxyServer(config)
    m.addons.add(Addon())

    # run mitmproxy in the background, e.g. integrated with another server
    loop = asyncio.get_event_loop()
    t = threading.Thread(target=loop_in_thread, args=(loop, m))
    t.start()

    # Other servers, such as a web server, might be started then.
    time.sleep(20)
    print('going to shutdown mitmproxy')
    m.shutdown()
from BigSully's gist
You can put your Addon class into your_script.py and then run mitmdump -s your_script.py. mitmdump comes without the console interface and can run in the background.
We (mitmproxy devs) officially don't support manual instantiation from Python anymore because that creates a massive amount of support burden for us. If you have some Python experience you can probably find your way around.
What if my addon has additional dependencies?
Approach 1: pip install mitmproxy is still perfectly supported and gets you the same functionality as the standalone binaries. Bonus tip: You can run venv/bin/mitmproxy or venv/Scripts/mitmproxy.exe to invoke mitmproxy in your virtualenv without having your virtualenv activated.
Approach 2: You can install mitmproxy with pipx and then run pipx inject mitmproxy <your dependency name>. See https://docs.mitmproxy.org/stable/overview-installation/#installation-from-the-python-package-index-pypi for details.
How can I debug mitmproxy itself?
If you are debugging from the command line (be it print statements or pdb), the easiest approach is to run mitmdump instead of mitmproxy, which provides the same functionality minus the console interface. Alternatively, you can use PyCharm's remote debug functionality, which also works while the console interface is active (https://github.com/mitmproxy/mitmproxy/blob/main/examples/contrib/remote-debug.py).
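To illustrate the addon-script route mentioned above (a rough sketch, not an official example): your_script.py only needs to expose a module-level addons list, and mitmdump -s your_script.py loads it and runs in the background without the console interface.

# your_script.py -- run with:  mitmdump -s your_script.py
class Counter:
    def __init__(self):
        self.num = 0

    def request(self, flow):
        self.num += 1
        flow.request.headers["count"] = str(self.num)

# mitmproxy looks for a module-level "addons" list in scripts loaded with -s
addons = [Counter()]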
The example below should work fine with mitmproxy v7:
from mitmproxy.tools import main
from mitmproxy.tools.dump import DumpMaster

options = main.options.Options(listen_host='0.0.0.0', listen_port=8080)
m = DumpMaster(options=options)
# the rest is the same as in previous versions
from mitmproxy.addons.proxyserver import Proxyserver
from mitmproxy.options import Options
from mitmproxy.tools.dump import DumpMaster

options = Options(listen_host='127.0.0.1', listen_port=8080, http2=True)
m = DumpMaster(options, with_termlog=True, with_dumper=False)
m.server = Proxyserver()
m.addons.add(
    # addons here
)
m.run()
Hi, I think that should do it

Dividing large program into subcommands with argparse

I want to use six subcommands (using subparsers from the argparse library) to divide my large program into smaller independent programs, and be able to run them individually. In other words, I envision running six commands from the command line one after the other, where the results of each command feed in as arguments of the next one. (Or if that is not possible with argparse, then at least some way of running each of the six parts independently).
I had no problem with one parser, but when trying to understand how to use subparsers for this task I found the documentation too confusing.
Currently my code is something like:
import argparse
from my_functions import (func_a, func_b, func_c, func_d, func_e, func_f)

parser = argparse.ArgumentParser()  # Top-level parser
subparsers = parser.add_subparsers()

parser_a = subparsers.add_parser('parser_a', help='parser_a_help')
parser_a.set_defaults(func=func_a)
parser_a.add_argument('a_arg', type=int)

parser_b = subparsers.add_parser('parser_b', help='parser_b_help')
parser_b.set_defaults(func=func_b)
parser_b.add_argument('b_arg', type=int)

parser_c = subparsers.add_parser('parser_c', help='parser_c_help')
parser_c.set_defaults(func=func_c)
parser_c.add_argument('c_arg', type=int)

parser_d = subparsers.add_parser('parser_d', help='parser_d_help')
parser_d.set_defaults(func=func_d)
parser_d.add_argument('d_arg', type=int)

parser_e = subparsers.add_parser('parser_e', help='parser_e_help')
parser_e.set_defaults(func=func_e)
parser_e.add_argument('e_arg', type=int)

parser_f = subparsers.add_parser('parser_f', help='parser_f_help')
parser_f.set_defaults(func=func_f)
parser_f.add_argument('f_arg', type=int)

# Parse arguments
args = parser.parse_args()
args.func(args)

def main(a_arg, b_arg, c_arg, d_arg, e_arg, f_arg):
    # Do stuff with these args
    ...

if __name__ == "__main__":
    main(args.a_arg, args.b_arg, args.c_arg, args.d_arg, args.e_arg, args.f_arg)
So the behavior I want is that on the command line I can type
$ python my_function.py parser_a 3
$ python my_function.py parser_b 5
$ python my_function.py parser_c 8
$ python my_function.py parser_d 150
$ python my_function.py parser_e 42
$ python my_function.py parser_f 2
So that if there's a problem in one subcommand I can run that one independently for debugging.
Any help understanding the logic of what I should be doing would be greatly appreciated. I'm not even sure if the behavior I want is the behavior that I should want.
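For what it's worth, here is a minimal sketch of the usual dispatch pattern (with hypothetical func_a/func_b stand-ins rather than the asker's my_functions module): each subparser registers its handler via set_defaults, and only the subcommand named on the command line runs.

import argparse

def func_a(args):
    print(f"running a with {args.a_arg}")

def func_b(args):
    print(f"running b with {args.b_arg}")

def main():
    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers(dest="command", required=True)

    parser_a = subparsers.add_parser("parser_a", help="parser_a help")
    parser_a.add_argument("a_arg", type=int)
    parser_a.set_defaults(func=func_a)

    parser_b = subparsers.add_parser("parser_b", help="parser_b help")
    parser_b.add_argument("b_arg", type=int)
    parser_b.set_defaults(func=func_b)

    args = parser.parse_args()
    args.func(args)  # dispatch to the chosen subcommand only

if __name__ == "__main__":
    main()

With this layout, python my_function.py parser_a 3 runs func_a alone, which matches the per-subcommand debugging the question asks for.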

IPython Notebook: Open/select file with GUI (Qt Dialog)

When you perform the same analysis in a notebook on different data files, it may be handy to select a data file graphically.
In my Python scripts I usually implement a Qt dialog that returns the filename of the selected file:
from PySide import QtCore, QtGui

def gui_fname(dir=None):
    """Select a file via a dialog and return the file name."""
    if dir is None:
        dir = './'
    fname = QtGui.QFileDialog.getOpenFileName(None, "Select data file...",
                                              dir, filter="All files (*);; SM Files (*.sm)")
    return fname[0]
However, running this function from a notebook
full_fname = gui_fname()
causes the kernel to die (and restart):
Interestingly, putting these 3 commands in 3 separate cells works:
%matplotlib qt
full_fname = gui_fname()
%matplotlib inline
but when I put those commands in a single cell the kernel dies again.
This prevents creating a function like gui_fname_ipynb() that transparently allows selecting a file with a GUI.
For convenience, I created a notebook illustrating the problem:
Open/select file with GUI (Qt Dialog)
Any suggestion on how to execute a dialog for file selection from within an IPython Notebook?
Using Anaconda 5.0.0 on Windows (Python 3.6.2, IPython 6.1.0), the following two options both work for me.
OPTION 1: Entirely in a Jupyter notebook:
CELL 1:
%gui qt

from PyQt5.QtWidgets import QFileDialog

def gui_fname(dir=None):
    """Select a file via a dialog and return the file name."""
    if dir is None:
        dir = './'
    fname = QFileDialog.getOpenFileName(None, "Select data file...",
                                        dir, filter="All files (*);; SM Files (*.sm)")
    return fname[0]
CELL 2:
gui_fname()
This is working for me but it seems a bit...fragile. If I combine these two things into the same cell, it crashes. Or if I omit the %gui qt, it crashes. If I "restart kernel and run all cells", it doesn't work. So I kinda like this other option...
MORE RELIABLE OPTION: Separate script that opens dialog box in a new process
(Based on mkrog code here.)
PUT THE FOLLOWING IN A SEPARATE PYTHON SCRIPT CALLED blah.py:
from sys import executable, argv
from subprocess import check_output
from PyQt5.QtWidgets import QFileDialog, QApplication

def gui_fname(directory='./'):
    """Open a file dialog, starting in the given directory, and return
    the chosen filename."""
    # run this exact file in a separate process, and grab the result
    file = check_output([executable, __file__, directory])
    return file.strip()

if __name__ == "__main__":
    directory = argv[1]
    app = QApplication([directory])
    fname = QFileDialog.getOpenFileName(None, "Select a file...",
                                        directory, filter="All files (*)")
    print(fname[0])
...AND IN YOUR JUPYTER NOTEBOOK
import blah
blah.gui_fname()
I have universal code that does its job without any problem. Here is my suggestion:
try:
    # Python 2
    from Tkinter import Tk
    import tkFileDialog as filedialog
except ImportError:
    # Python 3
    from tkinter import Tk
    from tkinter import filedialog

Tk().withdraw()  # we don't want a full GUI, so keep the root window from appearing
filenames = filedialog.askopenfilenames()  # show an "Open" dialog box and return the paths of the selected files
print(filenames)
hope it can be useful
This behaviour was a bug in IPython:
https://github.com/ipython/ipython/issues/4997
that was fixed here:
https://github.com/ipython/ipython/pull/5077
The function to open a GUI dialog should work on current master and on the upcoming 2.0 release.
To date, the last 1.x version (1.2.1) does not include a backport of the fix.
EDIT: The example code still crashes IPython 2.x, see this issue.