Is there any way to make IPython's logging capability include output as well as input?
This is what a log file looks like currently:
#!/usr/bin/env python
# 2012-08-06.py
# IPython automatic logging file
# 12:02
# =================================
print "test"
I'd like to have one more line show up:
#!/usr/bin/env python
# 2012-08-06.py
# IPython automatic logging file
# 12:02
# =================================
print "test"
# test
(the # is because I assume that is needed to prevent breaking IPython's logplay feature)
I suppose this is possible using IPython notebooks, but on at least one machine I need this for, I'm limited to ipython 0.10.2.
EDIT: I'd like to know how to set this up automatically, i.e. within the configuration file. Right now my config looks like
from time import strftime
import os
logfilename = strftime('ipython_log_%Y-%m-%d')+".py"
logfilepath = "%s/%s" % (os.getcwd(),logfilename)
file_handle = open(logfilepath,'a')
file_handle.write('########################################################\n')
out_str = '# Started Logging At: '+ strftime('%Y-%m-%d %H:%M:%S\n')
file_handle.write(out_str)
file_handle.write('########################################################\n')
file_handle.close()
c.TerminalInteractiveShell.logappend = logfilepath
c.TerminalInteractiveShell.logstart = True
but specifying c.TerminalInteractiveShell.log_output = True seems to have no effect
There's the -o option for %logstart:
-o: log also IPython's output. In this mode, all commands which
generate an Out[NN] prompt are recorded to the logfile, right after
their corresponding input line. The output lines are always
prepended with a '#[Out]# ' marker, so that the log remains valid
Python code.
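For example, a short session logged with %logstart -o might look like the following (hypothetical input). Note that, per the documentation quoted above, only commands that produce an Out[NN] prompt get an output line, so the print statement from the question would still be logged without a "# test" line, whereas a plain expression does get one:
#!/usr/bin/env python
# 2012-08-06.py
# IPython automatic logging file
# 12:02
# =================================
print "test"
2 + 2
#[Out]# 4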
ADDENDUM: If you are in an interactive ipython session for which logging has already been started, you must first stop logging and then restart:
In [1]: %logstop
In [2]: %logstart -o
Activating auto-logging. Current session state plus future input saved.
Filename : ./ipython.py
Mode : backup
Output logging : True
Raw input log : False
Timestamping : False
State : active
Observe that, after the restart, "Output Logging" is now "True".
I'm trying to use flake8 as the default python linter using python-language-server on neovim v0.5.
python-lsp documentation says to set pylsp.configurationSources to ['flake8'], but doesn't specify which file to edit.
Where does the python-lsp-server config file reside?
According to the flake8 documentation, the location of the flake8 config file varies by system: on Linux and Mac it is ~/.config/flake8, and on Windows it is $HOME\.flake8 ($HOME is something like C:\\Users\sigmavirus24). The content should be in INI format:
[flake8]
max-line-length = 100
max-complexity = 30
ignore =
    # missing whitespace around arithmetic operator
    E226,
    # line break before/after binary operator
    W503,
    W504,
    # expected 1 blank line, found 0
    E301,E302,
To suppress a single warning, it is also handy to add a comment such as # noqa: F841 (replace F841 with the actual error code you want to suppress) at the end of the offending line.
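For instance, a minimal hypothetical snippet with the suppression comment in place (F841 flags a local variable that is assigned to but never used):
def example():
    unused = 42  # noqa: F841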
Ref: https://jdhao.github.io/2020/11/05/pyls_flake8_setup/#config-location
I am a bit confused about how to save an IPython alias so that every time I open an IPython session (after having saved the alias once), I can use the alias directly without having to define it again.
For example, when using IPython on Linux (or Windows), I would like to open a file with vi rather than !vi:
vi filename
!vi filename
To generate the default configuration file ipython_config.py in your IPython directory under profile_default:
$ ipython profile create
Find ipython_config.py on Linux/Windows:
# use the find command on Linux
find / -name ipython_config.py
# on Windows, you can use any search tool you like;
# on the command line, you can also run
ipython locate profile
# which prints the profile directory that contains it
Edit the ipython_config.py file and add the following content:
c = get_config()
c.TerminalIPythonApp.display_banner = True
c.InteractiveShellApp.log_level = 20
c.InteractiveShellApp.extensions = []
c.InteractiveShellApp.exec_lines = []
c.InteractiveShellApp.exec_files = ['mycode.py']  # files to run when IPython starts
c.InteractiveShell.autoindent = True
c.InteractiveShell.colors = 'LightBG'  # IPython console color scheme
c.InteractiveShell.confirm_exit = False
c.InteractiveShell.editor = 'vim'  # change to your favorite editor
c.InteractiveShell.xmode = 'Context'
c.PrefilterManager.multi_line_specials = True
# add your aliases to the following list
c.AliasManager.user_aliases = [('vi', 'vim'), ('py', 'python'), ('git', 'git')]  # I added git, vim and python; I really dislike "!"
Save the file, exit, and restart IPython to pick up the new aliases.
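After restarting, the aliases work directly at the IPython prompt without the "!" prefix; any arguments are passed through to the underlying command. For example (a hypothetical session):
In [1]: vi notes.py    # runs "vim notes.py"
In [2]: py script.py   # runs "python script.py"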
Thanks @jack yang.
1. emacs ~/.ipython/profile_default/ipython_config.py
2. At the end, write down
c.AliasManager.user_aliases = [('e', 'emacsclient -t')]
3. Exit and restart IPython.
I have a Plone project which is created by a buildout script and needs a default encoding of utf-8. This is usually done in the sitecustomize.py file of the Python installation. Since there is a virtualenv, I'd like to generate this file automatically, to contain something like:
import sys
sys.setdefaultencoding('utf-8')
After generation I have two empty sitecustomize.py files - one in parts/instance/ and one in parts/buildout; but neither of them seems to be used (their directories don't appear in sys.path).
I tried zopepy:
>>> from os.path import join, isfile
>>> from pprint import pprint
>>> import sys
>>> pprint([p for p in sys.path
... if isfile(join(p, 'sitecustomize.py'))
... ])
and found another one in my local lib/python2.7/site-packages/ directory which looks good; but it doesn't work:
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
This directory sits near the end of sys.path, because I needed to add it via an extra-paths entry (to get another historical package).
Any pointers? Thank you!
System information: CentOS 7, Python 2.7.5
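For reference, one way to check which sitecustomize module (if any) a given interpreter actually imported is to look it up in sys.modules; a minimal sketch (it returns None if no sitecustomize was imported at startup, otherwise its repr shows the file that was picked up):
>>> import sys
>>> sys.modules.get('sitecustomize')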
Edit:
I deleted the two empty sitecustomize.py files; now I have a default encoding of utf-8 in the zopepy session but still ascii in Plone; this surprises me, because I have this in my buildout script:
[zopepy]
recipe=zc.recipe.egg
eggs = ${instance:eggs}
extra-paths = ${instance:extra-paths}
interpreter = zopepy
scripts = zopepy
To debug this, I created a little function which I added to my code, and which displays a little information about relevant modules in the sys.path:
import sys
from os.path import join, isdir, isfile

def sitecustomize_info():
    plen = len(sys.path)
    print '-' * 79
    print 'sys.path has %(plen)d entries' % locals()
    for tup in zip(range(1, plen+1), sys.path):
        nr, dname = tup
        if isdir(dname):
            for fname in ('site.py', 'sitecustomize.py'):
                if isfile(join(dname, fname)):
                    print '%(nr)4d. %(dname)s/%(fname)s' % locals()
            spname = join(dname, 'site-packages', 'sitecustomize.py')
            if isfile(spname):
                print '%(nr)4d. %(spname)s' % locals()
        else:
            print '? %(dname)s is not a directory' % locals()
    print '-' * 79
Output:
sys.path has 303 entries
8. /usr/lib64/python2.7/site-packages/sitecustomize.py
295. /opt/zope/instances/wnzkb/lib/python2.7/site-packages/sitecustomize.py
? /usr/lib64/python27.zip is not a directory
298. /usr/lib64/python2.7/site.py
? /usr/lib64/python2.7/lib-tk is not a directory
? /usr/lib64/python2.7/lib-old is not a directory
303. /usr/lib/python2.7/site-packages/sitecustomize.py
All sitecustomize.py files look the same (switching to utf-8), and I didn't tweak site.py (for now; if everything else fails, I might need to.)
If you really want/need to use the sitecustomize.py trick, you could include this part in your buildout:
[fixencode]
recipe = plone.recipe.command
stop-on-error = yes
update-command = ${fixencode:command}
command =
    SITE_PACKAGES=$(${buildout:executable} -c \
        'from distutils.sysconfig import get_python_lib;print(get_python_lib())')
    cat > $SITE_PACKAGES/../sitecustomize.py << EOF
    #!${buildout:executable} -S
    import sys
    sys.setdefaultencoding('utf-8')
    EOF
It will be added into the site-packages folder from your virtualenv.
It looks like sitecustomize.py is not found anymore unless placed in the global lib directories (Discussion "deleting setdefaultencoding in site.py is evil" (2009), Tracker ticket "sitecustomize.py not found") - and this was made on purpose (!).
This is to prevent users from overriding the default encoding which might have been adjusted by some library. Libraries shouldn't do that, however.
Thus, whoever needs to set the default encoding is pushed towards doing it globally, which looks like a very silly idea to me. I'd consider it much more reasonable to set this in my virtual environment.
Unfortunately the sitecustomize.py modules in a virtualenv seem to be silently ignored; but it is possible to edit the local site.py. Here is a little sed script:
# vv-------1------vv vv---2---vv vv--------3------vv vv-----------4------------vv
sed --in-place -e 's,^\( encoding =\) \("ascii"\) \(# Default value\) \(set by _PyUnicode_Init()\),\1 "utf-8" \3 \2 \4,' lib/python2.7/site.py
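Afterwards you can verify in a freshly started interpreter (for instance the instance's Python or bin/zopepy) that the change took effect; the expected result:
>>> import sys
>>> sys.getdefaultencoding()
'utf-8'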
I would like to work with several ipython notebooks at once sharing the same namespace. Is there currently (ipython-1.1.0) a way to do this?
I tried creating different notebooks on the same ipython kernel, but the notebooks don't share a namespace. Also, I've been able to use a terminal console alongside a notebook on the same namespace using the answers in Using IPython console along side IPython notebook, but I couldn't find the notebook equivalent of the --existing argument.
Thanks a lot
Unfortunately this no longer works; you get an error message saying that IPython.kernel has been replaced by IPython.parallel.
A less elegant way than above to alter this is to change IPython/frontend/html/notebook/kernelmanager.py around line 273 from
kernel_id = self.kernel_for_notebook(notebook_id)
to
kernel_id = None
for notebook_id in self._notebook_mapping:
    kernel_id = self._notebook_mapping[notebook_id]
    break
For Anaconda python, replace start_kernel in kernelmanager.py with
def start_kernel(self, kernel_id=None, path=None, **kwargs):
    global saved_kernel_id
    if saved_kernel_id:
        return saved_kernel_id
    if kernel_id is None:
        kwargs['extra_arguments'] = self.kernel_argv
        if path is not None:
            kwargs['cwd'] = self.cwd_for_path(path)
        kernel_id = super(MappingKernelManager, self).start_kernel(**kwargs)
        self.log.info("Kernel started: %s" % kernel_id)
        self.log.debug("Kernel args: %r" % kwargs)
        self.add_restart_callback(kernel_id,
            lambda: self._handle_kernel_died(kernel_id),
            'dead',
        )
    else:
        self._check_kernel_id(kernel_id)
        self.log.info("Using existing kernel: %s" % kernel_id)
    saved_kernel_id = kernel_id
    return kernel_id
and add
saved_kernel_id = None
above
class MappingKernelManager(MultiKernelManager):
True IPython gurus, please supply the correct fix. A lot of people using notebooks want the ability to share a kernel; it's natural, because a single notebook quickly grows too big for one complex application, so it is easier to break the application down into multiple notebooks.
Also, gurus, while you're listening, it would be nice to have a collapse-expand feature as in Mathematica, so you can view only the part of the notebook you care about and hide the rest.
The IPython Notebook does not have the equivalent of --existing. Notebooks do not share kernels. It is not a limitation of the notebook itself; it is just a design decision made in the notebook server code. The server code can be modified, for instance, to have all notebooks share the same kernel. You can do this with a little monkeypatching in your IPython configuration. Start by creating a profile:
$ ipython profile create singlekernel
[ProfileCreate] Generating default config file: u'~/.ipython/profile_singlekernel/ipython_config.py'
[ProfileCreate] Generating default config file: u'~/.ipython/profile_singlekernel/ipython_qtconsole_config.py'
[ProfileCreate] Generating default config file: u'~/.ipython/profile_singlekernel/ipython_notebook_config.py'
[ProfileCreate] Generating default config file: u'~/.ipython/profile_singlekernel/ipython_nbconvert_config.py'
and edit $(ipython locate profile singlekernel)/ipython_notebook_config.py to contain:
# Configuration file for ipython-notebook.
c = get_config()
import os
import uuid
from IPython.kernel.multikernelmanager import MultiKernelManager
def start_kernel(self, **kwargs):
    """Minimal override of MKM.start_kernel that always returns the same kernel"""
    kernel_id = kwargs.pop('kernel_id', str(uuid.uuid4()))
    if self.km is None:
        self.km = self.kernel_manager_factory(connection_file=os.path.join(
                self.connection_dir, "kernel-%s.json" % kernel_id),
                parent=self, autorestart=True, log=self.log
        )
    if not self.km.is_alive():
        self.log.info("starting single kernel")
        self.km.start_kernel(**kwargs)
    else:
        self.log.info("reusing existing kernel")
    self._kernels[kernel_id] = self.km
    return kernel_id
MultiKernelManager.km = None
MultiKernelManager.start_kernel = start_kernel
This just overrides the kernel starting mechanism to start only one kernel and return it at every subsequent request,
rather than starting a new one for each kernel ID.
Now whenever you start the notebook server with
ipython notebook --profile singlekernel
all of the notebooks in that session will share the same kernel.
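Once the notebooks share one kernel, anything defined in one notebook is visible in the others; a trivial hypothetical check:
# In the first notebook
shared_value = 42

# In the second notebook (same kernel)
print(shared_value)   # prints 42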
Where is ipython's configuration file, which starts with c = get_config(), executed? I'm asking because I want to understand what order things are done in ipython, e.g. why certain commands will not work if included as c.InteractiveShellApp.exec_lines.
This is related to my other question, Log IPython output?, because I want access to a logger attribute, but I can't figure out how to access it in the configuration file, and by the time exec_lines are run, the logger has already started (it's too late).
EDIT: I've accepted a solution based on using a startup file in ipython0.12+. Here is my implementation of that solution:
from time import strftime
import os.path
ip = get_ipython()
#ldir = ip.profile_dir.log_dir
ldir = os.getcwd()
fname = 'ipython_log_' + strftime('%Y-%m-%d') + ".py"
filename = os.path.join(ldir, fname)
notnew = os.path.exists(filename)
try:
    ip.magic('logstart -o %s append' % filename)
    if notnew:
        ip.logger.log_write( u"########################################################\n" )
    else:
        ip.logger.log_write( u"#!/usr/bin/env python\n" )
        ip.logger.log_write( u"# " + fname + "\n" )
        ip.logger.log_write( u"# IPython automatic logging file\n" )
        ip.logger.log_write( u"# Started Logging At: " + strftime('%Y-%m-%d %H:%M:%S\n') )
        ip.logger.log_write( u"########################################################\n" )
    print " Logging to "+filename
except RuntimeError:
    print " Already logging to "+ip.logger.logfname
There are only two subtle differences from the proposed solution linked:
1. it saves the log to the cwd instead of a log directory (though I like that more...)
2. ip.magic_logstart doesn't seem to exist; use ip.magic('logstart') instead
The config system sets up a special namespace containing the get_config() function, runs the config file, and collects the values to apply them to the objects as they're created. Referring to your previous question, it doesn't look like there's a configuration value for logging output. You may want to start logging yourself after config, when you can control it more precisely. See this example of starting logging automatically.
Your other question mentions that you're limited to 0.10.2 on one system: that has a completely different config system that won't even look at the same file.