I am a bit confused about how to save an IPython alias so that every time I open an IPython session (after saving the alias once) I can use the aliased command directly, without having to define it again.
For example, when using IPython on Linux (or Windows), I would like to type vi rather than !vi to edit a file:
vi filename
!vi filename
To generate the default configuration file ipython_config.py (under profile_default in your IPython directory), run:
$ ipython profile create
To find ipython_config.py on Linux or Windows:
# on Linux, use the find command
find / -name ipython_config.py
# on Windows, you can use any file-search tool.
# On either platform, the command line
ipython locate profile
# prints the profile directory; you will find ipython_config.py there.
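If you prefer to locate it from Python rather than by searching the filesystem, a small sketch like the following should also work (on older IPython versions the import lives under IPython.utils.path instead of IPython.paths):
import os
from IPython.paths import get_ipython_dir  # older versions: from IPython.utils.path import get_ipython_dir

# Print the expected location of the default profile's config file
print(os.path.join(get_ipython_dir(), "profile_default", "ipython_config.py"))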
Edit the ipython_config.py file and add the following content:
c = get_config()
c.TerminalIPythonApp.display_banner = True
c.InteractiveShellApp.log_level = 20
c.InteractiveShellApp.extensions = []
c.InteractiveShellApp.exec_lines = []
c.InteractiveShellApp.exec_files = ['mycode.py']  # files to run when IPython starts
c.InteractiveShell.autoindent = True
c.InteractiveShell.colors = 'LightBG'  # IPython console color scheme
c.InteractiveShell.confirm_exit = False
c.InteractiveShell.editor = 'vim'  # change this to your favorite editor
c.InteractiveShell.xmode = 'Context'
c.PrefilterManager.multi_line_specials = True
# Add your own aliases to the following list.
c.AliasManager.user_aliases = [('vi', 'vim'), ('py', 'python'), ('git', 'git')]  # I added git, vim and python; I really dislike typing "!"
Save the file, exit, and restart IPython; the aliases will be available in every new session.
1. emacs ~/.ipython/profile_default/ipython_config.py
2. At the end of the file, write:
c.AliasManager.user_aliases = [('e', 'emacsclient -t')]
3. Exit and restart IPython.
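As an aside (my own note, not part of the original answer): the %alias magic defines the same alias for the current session only, which is handy for trying it out before committing it to the config file.
In [1]: %alias e emacsclient -t
In [2]: e notes.txt        # runs "emacsclient -t notes.txt", no "!" needed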
I'm using Neovim for coding C++ and I have some trouble with it.
I often use files to read and write data. When I run my C++ program with g++ -o data data.cpp && ./data, the output file doesn't reload in Neovim.
I tried using
set autoread
au CursorHold * checktime
but it doesn't work.
I don't want to type :e to reload the file every time I run my code.
Additionally, I'd like to know how to make NERDTree reload automatically when I create a new file or folder in the explorer.
Environment:
nvim: 0.4.3
Ubuntu 18.04 LTS
I solved it.
Thanks to spacetime-continuum on Reddit.
This is how I configured it:
" trigger `autoread` when files changes on disk
set autoread
autocmd FocusGained,BufEnter,CursorHold,CursorHoldI * if mode() != 'c' | checktime | endif
" notification after file change
autocmd FileChangedShellPost *
\ echohl WarningMsg | echo "File changed on disk. Buffer reloaded." | echohl None
Here is how I auto-read buffers using a Lua config:
vim.o.autoread = true
vim.api.nvim_create_autocmd({ "BufEnter", "CursorHold", "CursorHoldI", "FocusGained" }, {
command = "if mode() != 'c' | checktime | endif",
pattern = { "*" },
})
How can I add configuration when starting a kernel via jupyter_client.manager.start_new_kernel(), without setting up a default configuration file in the .ipython directory? I want to set the shell colors to 'NoColor' and initialize specific formatters without a config file.
This is equivalent to the following config file:
c = get_config()
c.InteractiveShell.colors = 'NoColor'
This worked: manager.start_new_kernel(extra_arguments=["--colors='NoColor'"])
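For completeness, here is a slightly fuller sketch of that call (my own illustration; it assumes a jupyter_client version where extra keyword arguments are forwarded to the kernel manager's start_kernel):
from jupyter_client.manager import start_new_kernel

# start_new_kernel returns a (KernelManager, KernelClient) pair; extra_arguments
# are appended to the kernel's command line, as in the answer above.
km, kc = start_new_kernel(extra_arguments=["--colors='NoColor'"])
kc.execute("1 + 1")   # run code in the kernel
kc.stop_channels()
km.shutdown_kernel()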
The IPython documentation implies there is a way to modify the config file to include an additional search path for templates.
Please advise. I have a template file with the extension *.tpl which I do not want to have to copy into whatever local directory I happen to be working in.
Any tips? I've searched everywhere and can't find this. nbconvert seems to search only the local directory from which I run ipython nbconvert test.ipynb --to slides --template output_toggle_html.
Thanks.
In the ipython_nbconvert_config.py file in your profile directory, you can add the line c.TemplateExporter.template_path = ['.'], which reproduces the default behavior; however, you can append additional directories to this list. For example, the code below adds $IPYTHONDIR/nbextensions/templates, and nbconvert will search for *.tpl files in those locations, in the order in which they appear in the list.
from os import environ

# Assumes the IPYTHONDIR environment variable is set
IPYTHONDIR = environ["IPYTHONDIR"]
template_rel_path = '/nbextensions/templates'
template_path = IPYTHONDIR + template_rel_path
c.TemplateExporter.template_path = [
    '.',            # keep the default (current directory) ...
    template_path,  # ... and also search $IPYTHONDIR/nbextensions/templates
]
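If you would rather set this from Python instead of editing the profile config, the same trait can be applied to an exporter instance (a hedged sketch; import locations differ between IPython-bundled nbconvert and the standalone nbconvert package, and the trait name here follows the answer above):
# Older IPython: "from IPython.config import Config" and "from IPython.nbconvert import SlidesExporter"
from traitlets.config import Config
from nbconvert import SlidesExporter

c = Config()
c.TemplateExporter.template_path = ['.', '/path/to/nbextensions/templates']

exporter = SlidesExporter(config=c)
body, resources = exporter.from_filename('test.ipynb')
with open('test.slides.html', 'w') as f:
    f.write(body)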
I would like to work with several IPython notebooks at once, sharing the same namespace. Is there currently (ipython-1.1.0) a way to do this?
I tried creating different notebooks on the same IPython kernel, but the notebooks don't share a namespace. I have also been able to use a terminal console alongside a notebook on the same namespace, using the answers in Using IPython console along side IPython notebook, but I couldn't find the notebook equivalent of the --existing argument.
Thanks a lot
Unfortunately this no longer works; you get the error message "ipython.kernel replaced by ipython.parallel".
A less elegant way than the approach above is to change IPython/frontend/html/notebook/kernelmanager.py around line 273 from
kernel_id = self.kernel_for_notebook(notebook_id)
to
kernel_id = None
for notebook_id in self._notebook_mapping:
    kernel_id = self._notebook_mapping[notebook_id]
    break
For Anaconda Python, replace start_kernel in kernelmanager.py with:
def start_kernel(self, kernel_id=None, path=None, **kwargs):
    global saved_kernel_id
    if saved_kernel_id:
        return saved_kernel_id
    if kernel_id is None:
        kwargs['extra_arguments'] = self.kernel_argv
        if path is not None:
            kwargs['cwd'] = self.cwd_for_path(path)
        kernel_id = super(MappingKernelManager, self).start_kernel(**kwargs)
        self.log.info("Kernel started: %s" % kernel_id)
        self.log.debug("Kernel args: %r" % kwargs)
        self.add_restart_callback(kernel_id,
            lambda: self._handle_kernel_died(kernel_id),
            'dead',
        )
    else:
        self._check_kernel_id(kernel_id)
        self.log.info("Using existing kernel: %s" % kernel_id)
    saved_kernel_id = kernel_id
    return kernel_id
and add
saved_kernel_id = None
above
class MappingKernelManager(MultiKernelManager):
True IPython gurus, please supply the correct fix. Many people using notebooks want the ability to share a kernel; it is natural, because a single notebook quickly grows too big for one complex application, and it is easier to break the application down into multiple notebooks.
Also, gurus, while you're listening: it would be nice to have a collapse/expand feature, as in Mathematica, so you can view only the part of the notebook you care about and fold away the rest.
The IPython Notebook does not have an equivalent of --existing: notebooks do not share kernels. This is not a limitation of the notebook itself; it is simply a design decision made in the notebook server code. The server code can be modified, for instance, to have all notebooks share the same kernel. You can do this with a little monkeypatching in your IPython configuration. Start by creating a profile:
$ ipython profile create singlekernel
[ProfileCreate] Generating default config file: u'~/.ipython/profile_singlekernel/ipython_config.py'
[ProfileCreate] Generating default config file: u'~/.ipython/profile_singlekernel/ipython_qtconsole_config.py'
[ProfileCreate] Generating default config file: u'~/.ipython/profile_singlekernel/ipython_notebook_config.py'
[ProfileCreate] Generating default config file: u'~/.ipython/profile_singlekernel/ipython_nbconvert_config.py'
and edit $(ipython locate profile singlekernel)/ipython_notebook_config.py to contain:
# Configuration file for ipython-notebook.
c = get_config()

import os
import uuid
from IPython.kernel.multikernelmanager import MultiKernelManager

def start_kernel(self, **kwargs):
    """Minimal override of MKM.start_kernel that always returns the same kernel"""
    kernel_id = kwargs.pop('kernel_id', str(uuid.uuid4()))
    if self.km is None:
        self.km = self.kernel_manager_factory(connection_file=os.path.join(
                self.connection_dir, "kernel-%s.json" % kernel_id),
                parent=self, autorestart=True, log=self.log
        )
    if not self.km.is_alive():
        self.log.info("starting single kernel")
        self.km.start_kernel(**kwargs)
    else:
        self.log.info("reusing existing kernel")
    self._kernels[kernel_id] = self.km
    return kernel_id

MultiKernelManager.km = None
MultiKernelManager.start_kernel = start_kernel
This just overrides the kernel starting mechanism to start only one kernel and return it at every subsequent request,
rather than starting a new one for each kernel ID.
Now whenever you start the notebook server with
ipython notebook --profile singlekernel
all of the notebooks in that session will share the same kernel.
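As a quick sanity check (my own illustrative example, not from the original answer), you can confirm that the namespace really is shared by defining a variable in one notebook and reading it from another:
# In notebook A (run this cell first):
shared_value = 42

# In notebook B, opened on the same server started with --profile singlekernel:
print(shared_value)   # prints 42, because both notebooks use the same kernel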
Is there any way to make IPython's logging capability include output as well as input?
This is what a log file looks like currently:
#!/usr/bin/env python
# 2012-08-06.py
# IPython automatic logging file
# 12:02
# =================================
print "test"
I'd like to have one more line show up:
#!/usr/bin/env python
# 2012-08-06.py
# IPython automatic logging file
# 12:02
# =================================
print "test"
# test
(the # is because I assume that is needed to prevent breaking IPython's logplay feature)
I suppose this is possible using IPython notebooks, but on at least one machine I need this for, I'm limited to ipython 0.10.2.
EDIT: I'd like to know how to set this up automatically, i.e. within the configuration file. Right now my config looks like this:
from time import strftime
import os
logfilename = strftime('ipython_log_%Y-%m-%d')+".py"
logfilepath = "%s/%s" % (os.getcwd(),logfilename)
file_handle = open(logfilepath,'a')
file_handle.write('########################################################\n')
out_str = '# Started Logging At: '+ strftime('%Y-%m-%d %H:%M:%S\n')
file_handle.write(out_str)
file_handle.write('########################################################\n')
file_handle.close()
c.TerminalInteractiveShell.logappend = logfilepath
c.TerminalInteractiveShell.logstart = True
but specifying c.TerminalInteractiveShell.log_output = True seems to have no effect.
There's the -o option for %logstart:
-o: log also IPython's output. In this mode, all commands which
generate an Out[NN] prompt are recorded to the logfile, right after
their corresponding input line. The output lines are always
prepended with a '#[Out]# ' marker, so that the log remains valid
Python code.
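For illustration (my own example of the format described above, not taken from an actual log), an entry recorded with -o looks like this:
1 + 1
#[Out]# 2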
ADDENDUM: If you are in an interactive IPython session for which logging has already been started, you must first stop logging and then restart it with -o:
In [1]: %logstop
In [2]: %logstart -o
Activating auto-logging. Current session state plus future input saved.
Filename : ./ipython.py
Mode : backup
Output logging : True
Raw input log : False
Timestamping : False
State : active
Observe that, after the restart, "Output logging" is now "True".
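To make this automatic from the configuration file (addressing the EDIT in the question), one hedged option is to have IPython run the %logstart magic itself at startup via exec_lines, instead of relying on the logstart/logappend traits; the file name below is only an example:
# In ipython_config.py (illustrative sketch)
from time import strftime

c = get_config()
logfilename = strftime('ipython_log_%Y-%m-%d') + ".py"
c.InteractiveShellApp.exec_lines = [
    "%logstart -o " + logfilename,   # -o also records Out[] values
]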