ipython notebook is not updating when I change my code - ipython

So, I ran into a weird issue using an IPython notebook and I'm not sure what to do. Normally, when I run a part of the code and there is an error, I trace it back, fix it, and then re-run the code. I was doing exactly that, but even after making changes to the code, it looks like nothing is changing!
Here is the example... I am using Python 3.5 so xrange is gone. This then caused an error to be thrown:
XXXX
24 XXXX
25 XXXX
---> 26 for t in xrange(0,len(data),1):
27
28 XXXX
NameError: name 'xrange' is not defined
But after changing my code (you can see the difference below on line 26), the same error pops up!
XXXX
24 XXXX
25 XXXX
---> 26 for t in range(0,len(data),1):
27
28 XXX
NameError: name 'xrange' is not defined
Any ideas on why this would be happening?

As Thomas K said, you're probably making a change in an external file that was not re-imported. There is a very useful extension in the IPython notebook for such cases, called autoreload. With autoreload, whenever you modify an external file you do not have to import it again because the extension takes care of that for you. For more information check: ipython autoreload.

Whenever using external files along with IPython, use autoreload. It will reload the external files every time before executing any code in IPython.
Add this in the first cell of the notebook:
%load_ext autoreload
%autoreload 2
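With those two magics in place, a minimal sketch of the effect (helpers_demo.py and greet are made-up names for illustration):
# write a tiny throwaway module next to the notebook so the example is self-contained
with open("helpers_demo.py", "w") as f:
    f.write("def greet():\n    return 'version 1'\n")

import helpers_demo
helpers_demo.greet()  # -> 'version 1'

# now edit helpers_demo.py on disk and call helpers_demo.greet() again in a
# later cell: autoreload picks up the new definition without re-importing
# or restarting the kernel.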

For me this was due to one of the following:
Cause 1: imported module not updated
Solution:
import importlib
importlib.reload(your_module)
Cause 2: other
Solution: restart the kernel; in Jupyter Notebook this is done via Kernel > Restart.

I have the same problem. I tried the Jupyter autoreload magic but it didn't work. Finally, I solved it this way:
In the first cell, add:
import My_Functions as my
import importlib
importlib.reload(my)
But note that if the module is imported like this:
from My_Functions import *
I couldn't reload it properly.
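A rough sketch of one workaround, assuming My_Functions exists on disk as above (illustrative only): reload the module object first, then re-run the star import so the names are bound to the fresh definitions.
import importlib
import My_Functions
importlib.reload(My_Functions)
from My_Functions import *  # re-bind the star-imported names to the reloaded definitions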

I have the same issue sometimes. I think it has to do with memory - if I have a bunch of dataframes hanging around it seems to cause issues. If I restart the kernel using the Kernel > Restart option the problem goes away.

I have the same problem sometimes. I restarted the kernel but it didn't work. I tried running the cell (Ctrl+Enter) two or three times, and then the result was displayed according to the updated code. I hope it helps.

Insert a new empty cell with the + option, go to Kernel, and choose Restart & Run All.
Then fill in the newly inserted cell and run Kernel > Restart & Run All again.
It works for me.

Related

Relative Path (pathlib) name working on macOS but on Windows gives me an error

Currently I am working on a project that uses the pathlib library so that I can work on my Windows desktop when I need to and on my MacBook Pro, essentially being able to work across both operating systems. I have not had any issues at all until right now. Here is the setup:
I have a pipeline set up to automatically save a .joblib and a whole lot of .png files that will go to a directory called
output_dir = Path('../Trained_Models/Differential_gene_analysis/A Kidney Cancer Transcriptome Molecular Signature Identifies Tumors with Tumor Thrombus/Models train on TCGA data and test on Rodriguez data/Oct-XX-20XX')
For example, if I want to save a .joblib file under the name RandomForest_TumorThrombus_104.joblib, I would use the command
joblib.dump(model ,output_dir / 'RandomForest_TumorThrombus_104.joblib')
On my MacBook Pro, I have no issues when this is run, but on Windows it gives me the following error
FileNotFoundError: [Errno 2] No such file or directory: '..\\Trained_Models\\Differential_gene_analysis\\A Kidney Cancer Transcriptome Molecular Signature Identifies Tumors with Tumor Thrombus\\Models train on TCGA data and test on Rodriguez data\\Oct-17-2022\\RandomForest_TumorThrombus_104.joblib'
I have tried to use the .resolve() method to get the absolute path, but it still gives me the same error. I have tried experimenting to see what is going on, such as using os.path.exists(). When using the os.path.exists() method I get True for the following command:
os.path.exists(output_dir)
So it does indeed recognize that the directory exists. The next thing I tried was to rename the file to something like dddddd.joblib, and that worked. But I find that only a few names for the file would allow me to save it. During debugging I found that the most recent traceback frame occurs here:
with open(filename, 'wb') as f:
I was wondering if anyone here has any idea what is going on and how I can fix this issue? Please and thank you.
The solution was to enable long paths on Windows.
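For reference, a minimal sketch (reusing the directory and file name from the question) of checking whether the resolved path exceeds the classic 260-character MAX_PATH limit, which is what enabling long paths lifts:
from pathlib import Path

# directory and file name taken from the question; only the length check is new
output_dir = Path('../Trained_Models/Differential_gene_analysis/'
                  'A Kidney Cancer Transcriptome Molecular Signature Identifies Tumors with Tumor Thrombus/'
                  'Models train on TCGA data and test on Rodriguez data/Oct-XX-20XX')
full_path = (output_dir / 'RandomForest_TumorThrombus_104.joblib').resolve()

# paths longer than roughly 260 characters fail on Windows unless long paths are enabled
print(len(str(full_path)), str(full_path))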

openfoam ./makeParaview

I am currently building OpenFOAM 1912 from source and am having some trouble building ParaView. I just built Qt and CMake, but as soon as I type ./makeParaView qt-5.9.9 5.6.3 I get the following error:
./makeParaView: 64: local: -DWM_DP: bad variable name
./makeParaView: 64: ./makeParaView: -DOPENFOAM: bad variable name
A similar error occurs when I try to make VTK / Adios2. Any idea where I took the wrong turn?
Greetings
Gabbagandalf
The proper solution is related to a shell quoting issue:
- flag="$(stripCompilerFlags $flag)"
+ flag="$(stripCompilerFlags "$flag")"
but in the meantime you can simply change the shebang to #!/bin/bash - it is more forgiving.
As discussed and resolved in these GitLab tickets (ticket-1 and ticket-2):
The issue seems to be Ubuntu related.
Solution:
Prior to executing ./makeParaView, switch to bash:
Change the first line of the makeParaView script to #!/bin/bash
sudo dpkg-reconfigure dash
./makeParaView

zeppelin-0.7.3 Interpreter pyspark not found

I get the below error when I use pyspark via Zeppelin.
The python & spark interpreters work and all environment variables are set correctly.
print os.environ['PYTHONPATH']
/x01/spark_u/spark/python:/x01/spark_u/spark/python/lib/py4j-0.10.4-src.zip:/x01/spark_u/spark/python:/x01/spark_u/spark/python/lib/py4j-0.10.4-src.zip:/x01/spark_u/spark/python/lib/py4j-0.10.4-src.zip:/x01/spark_u/spark/python/lib/pyspark.zip:/x01/spark_u/spark/python:/x01/spark_u/spark/python/pyspark:/x01/spark_u/zeppelin/interpreter/python/py4j-0.9.2/src:/x01/spark_u/zeppelin/interpreter/lib/python
zeppelin-env.sh is set with the below vars:
export PYSPARK_PYTHON=/usr/local/bin/python2
export PYTHONPATH=${SPARK_HOME}/python:${SPARK_HOME}/python/lib/py4j-0.10.4-src.zip:${PYTHONPATH}
export SPARK_YARN_USER_ENV="PYTHONPATH=${PYTHONPATH}"
See the log file below:
INFO [2017-11-01 12:30:42,972] ({pool-2-thread-4}
RemoteInterpreter.java[init]:221) - Create remote interpreter
org.apache.zeppelin.spark.PySparkInterpreter
org.apache.zeppelin.interpreter.InterpreterException:
paragraph_1509038605940_-1717438251's Interpreter pyspark not
found
Thank you in advance
I found a workaround for the above issue. The "interpreter not found" error does not happen when I create a note inside a directory; it only happens when I use notes at the top level. Additionally, I found out that this issue does not happen in version 0.7.2.

How do I reset the IPython input prompt numbering? (console, not notebook or qt)

The following is my current IPython command prompt (regular terminal IPython, not the notebook, or Qt):
In [45]:
I'm done with my workflow, and am moving on to a new task. I'd like a way to reset the IPython input prompt number as follows:
In [1]:
I understand this may be possible based on a related (non-duplicate) question that I can neither reply to nor answer for my case: How do I reset the IPython input prompt numbering?
For further clarity, an example of the command I'm after:
In [99]: %reset_command_number
In [1]: 2+2
Out[1]: 4
In [2]: %reset_command_number
In [1]:
The 3 Magic Questions:
Does such magic (%reset_command_number) exist?
If the answer to 1 is yes, what dark magic is it?
If the answer to 1 is no, is this achievable from a lower-level IPython API? (Yes, I've searched the API docs, and no, I did not find a way to do it, yet...)
One possibility would be to modify the execution_count of the InteractiveShell:
import IPython; IPython.get_ipython().execution_count = 0
Using a one-line command with ; makes it easier to call this repeatedly (if you use the up arrow to scroll through the command history).
This has the side effect that after the first command you will also see the error below. I do not know whether it has any other side effects, though (it just says that the history logging was moved to a new session).
ERROR! Session/line number was not unique in database. History logging moved to new session 20741
Demo
In [1]: a = 1
In [2]: b = 88
In [3]: c = 99
In [4]: import IPython; IPython.get_ipython().execution_count = 0
In [1]: a
Out[1]: 1
ERROR! Session/line number was not unique in database. History logging moved to new session 20767
In [2]: b
Out[2]: 88
In [3]: c
Out[3]: 99
In [4]: import IPython; IPython.get_ipython().execution_count = 0
In [1]: a = 15
ERROR! Session/line number was not unique in database. History logging moved to new session 20768
In [2]: a
Out[2]: 15
In [3]: b
Out[3]: 88
Using an alias?
It is also possible to use an alias for the command. For example, create a file i_reset.py and put it into your site-packages, ~/.ipython/extensions/, your PYTHONPATH, or any other folder on your sys.path:
# i_reset.py
def i_reset(line):
    """
    Reset IPython shell numbering to [1]
    """
    import IPython; IPython.get_ipython().execution_count = 0

def load_ipython_extension(ipython):
    """This function is called when the extension is
    loaded. It accepts an IPython InteractiveShell
    instance. We can register the magic with the
    `register_magic_function` method of the shell
    instance."""
    ipython.register_magic_function(i_reset, 'line')
Then, you may use the extension by first loading it and then calling it like so:
In [4]: %load_ext i_reset
In [5]: i_reset
In [1]: a
Out[1]: 1
I'm not sure you can. What the notebook does is equivalent (from what I understood) to just opening a new Python session, as if you opened a new tab, started IPython again, and re-executed everything.
So, if I were you and had to start a new task wanting to begin at In [1], I would just start a new IPython session.
The equivalent in the IPython notebook for starting a new task would be to start a new notebook.
The only reason I can imagine why you would do this is that you want to access some variable from the previous task.
What I would do is save the object you want to re-use using pickle (see the sketch below), or save a variable into a file and reload it in a new IPython session.
Did you look at the docs on the input caching system? You may find answers to other questions related to yours there.
Note that I'm aware this is not really an answer, but it's a bit more than just a comment. Anyway, I hope this helps!
If you have more specific needs than just resetting the prompt number, just detail them.
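A minimal sketch of the pickle approach (the file name task_state.pkl and the variable are made up for illustration):
import pickle

# in the old session: save whatever you want to carry over
state = {"result": 42, "notes": "from the previous task"}
with open("task_state.pkl", "wb") as f:
    pickle.dump(state, f)

# in the fresh IPython session: load it back
with open("task_state.pkl", "rb") as f:
    state = pickle.load(f)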
Assuming you don't need the session data, the simplest is to exit and restart IPython.
$ ipython
In [1]: 1+1
Out[1]: 2
In [2]: 2+2
Out[2]: 4
In [3]: 3+3
Out[3]: 6
In [4]: exit
$ ipython
In [1]: print("Prompt numbering reset")
Prompt numbering reset
In [2]:

Pyinstaller --onefile warning pyconfig.h when importing scipy or scipy.signal

This is very simple to recreate.
If my script foo.py is:
import scipy
Then run:
python pyinstaller.py --onefile foo.py
When I launch foo.exe I get:
WARNING: file already exists but should not: C:\Users\username\AppData\Local\Temp\_MEI86402\Include\pyconfig.h
I've tested a few versions but the latest I've confirmed is 2.1dev-e958e02 running on Win7, Python 2.7.5 (32 bit), Scipy version 0.12.0
I've submitted a ticket with the Pyinstaller folks but haven't heard anything yet. Any clues how to debug this further?
You can hack the spec file to remove the second instance by adding these lines after a=Analysis:
for d in a.datas:
    if 'pyconfig' in d[0]:
        a.datas.remove(d)
        break
The answer by wtobia# worked for me. See https://github.com/pyinstaller/pyinstaller/issues/783
Go to C:\Python27\Lib\site-packages\PyInstaller\build.py
Find the def append(self, tpl): function.
Change if tpl[2] == "BINARY": to if tpl[2] in ["BINARY", "DATA"]:
Expanding upon Ilya's solution, I think this is a slightly more robust way of modifying the spec file (again, place it after the a=Analysis... statement):
a.datas = list({tuple(map(str.upper, t)) for t in a.datas})
I only tested this on a small test program (one with a single import and a print statement), but it seems to work. a.datas is a list of tuples of strings which contain the pyconfig.h paths. I convert them all to a single case and then dedup. I actually found that normalizing the case alone was sufficient to get it to work, which suggests to me that PyInstaller does case-sensitive deduping when it should be case-insensitive on Windows. However, I did the deduping myself for good measure.
I realized that the problem is that Windows is case-insensitive, so these 2 source directories are "duplicates":
include\pyconfig.h
Include\pyconfig.h
My solution is to manually tweak the .spec file after the a=Analysis() call:
import platform
if platform.system().find("Windows") >= 0:
    a.datas = [i for i in a.datas if i[0].find('Include') < 0]
This worked in my 2 tests.
A more flexible solution would be to check ALL items for case-insensitive collisions.
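A rough, untested sketch of what that more general check could look like, written against the same a.datas list of tuples used above (keep the first entry for each archive name that collides case-insensitively):
seen = set()
deduped = []
for entry in a.datas:
    key = entry[0].lower()  # compare archive names case-insensitively
    if key not in seen:
        seen.add(key)
        deduped.append(entry)
a.datas = deduped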
I ran the archive_viewer.py utility (from PyInstaller) on one of my own --onefile executables that has the same error and found that pyconfig.h is included twice:
(31374007, 6521, 21529, 1, 'x', 'include\\pyconfig.h'),
(31380528, 6521, 21529, 1, 'x', 'Include\\pyconfig.h'),
(31387049, 984, 2102, 1, 'x', 'pytz\\zoneinfo\\CET'),
Sadly though, I don't know how to fix it.
PyInstaller Manual link:
http://www.pyinstaller.org/export/d3398dd79b68901ae1edd761f3fe0f4ff19cfb1a/project/doc/Manual.html#archiveviewer