How do I resolve this pathlib __file__ NameError in Jupyter?

---> 84 filepath = Path(__file__) # this not part of course, got tip from friend
85 data = {
86 "W": Path(f"{filepath.parent}/washington.csv"),
87 "C": Path(f"{filepath.parent}/chicago.csv"),
88 "N": Path(f"{filepath.parent}/new_york_city.csv"),
89 }
91 df = pd.read_csv(data[city].as_posix())
NameError: name '__file__' is not defined
I get this name error in Jupyter but it works fine in VSCode and PyCharm. The files are in the same directory as the script file.
How do I resolve it?
I googled "pathlib error in Jupyter" and came across a Stack Overflow post suggesting there is an alternative to pathlib, e.g. os.path. Does anyone know how to use it to solve my issue?

I found the answer to my own question from user "peng" in this Stack Overflow post:
path problem : NameError: name '__file__' is not defined
Adding quotes around __file__ resolved the error:
Path(__file__) --> Path("__file__")
Note that __file__ is only defined when code runs from a script file; a notebook cell has no file, so the quoted version simply resolves relative to the current working directory.
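To see why the quoted version works: Path("__file__") is just a literal relative path whose parent is ".", so the CSV paths end up resolving against the notebook's working directory. A minimal sketch (file names taken from the question):

```python
from pathlib import Path

# Path("__file__") is a literal relative path named "__file__";
# its parent is ".", i.e. the current working directory.
filepath = Path("__file__")
print(filepath.parent)  # .

# So the lookup table resolves relative to wherever the notebook runs.
# Building it from Path.cwd() makes that explicit:
data = {
    "W": Path.cwd() / "washington.csv",
    "C": Path.cwd() / "chicago.csv",
    "N": Path.cwd() / "new_york_city.csv",
}
```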

Related

Issue getting Github code to work on my own Jupyter Notebook

I am new to Python and would like your help please.
I am copying the NorthernBobwhiteCNN code from GitHub to try to use the program on my computer: https://github.com/GAMELab-UGA/NorthernBobwhiteCNN. I cloned the GitHub repository and opened it in Jupyter Notebook, which I launched from the Command Prompt.
However, when I try to run the cells in model_prediction_example.ipynb after the import statement cell, I receive multiple errors for all the cells and the code won't run, even though everything is exactly the same as on GitHub.
Here are the errors I get using Load Trained Model cell:
RuntimeError Traceback (most recent call last)
Input In [2], in <cell line: 8>()
4 model = net.Net(params).cuda() if torch.cuda.is_available() else net.Net(params)
7 restore_path = os.path.join(model_dir, 'pretrained.pth.tar')
----> 8 _ = utils.load_checkpoint(restore_path, model, optimizer=None)
File ~\NorthernBobwhiteCNN\PythonCode\utils.py:136, in load_checkpoint(checkpoint, model, optimizer)
134 if not os.path.exists(checkpoint):
135 raise("File doesn't exist {}".format(checkpoint))
--> 136 checkpoint = torch.load(checkpoint)
137 model.load_state_dict(checkpoint['state_dict'])
139 if optimizer:
File ~\anaconda3\lib\site-packages\torch\serialization.py:789, in load(f, map_location, pickle_module, weights_only, **pickle_load_args)
787 except RuntimeError as e:
788 raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None
--> 789 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
790 if weights_only:
791 try:
File ~\anaconda3\lib\site-packages\torch\serialization.py:1131, in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args)
1129 unpickler = UnpicklerWrapper(data_file, **pickle_load_args)
1130 unpickler.persistent_load = persistent_load
-> 1131 result = unpickler.load()
1133 torch._utils._validate_loaded_sparse_tensors()
1135 return result
File ~\anaconda3\lib\site-packages\torch\serialization.py:1101, in _load.<locals>.persistent_load(saved_id)
1099 if key not in loaded_storages:
1100 nbytes = numel * torch._utils._element_size(dtype)
-> 1101 load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
1103 return loaded_storages[key]
File ~\anaconda3\lib\site-packages\torch\serialization.py:1083, in _load.<locals>.load_tensor(dtype, numel, key, location)
1079 storage = zip_file.get_storage_from_record(name, numel, torch.UntypedStorage).storage().untyped()
1080 # TODO: Once we decide to break serialization FC, we can
1081 # stop wrapping with TypedStorage
1082 loaded_storages[key] = torch.storage.TypedStorage(
-> 1083 wrap_storage=restore_location(storage, location),
1084 dtype=dtype)
File ~\anaconda3\lib\site-packages\torch\serialization.py:215, in default_restore_location(storage, location)
213 def default_restore_location(storage, location):
214 for _, _, fn in _package_registry:
--> 215 result = fn(storage, location)
216 if result is not None:
217 return result
File ~\anaconda3\lib\site-packages\torch\serialization.py:182, in _cuda_deserialize(obj, location)
180 def _cuda_deserialize(obj, location):
181 if location.startswith('cuda'):
--> 182 device = validate_cuda_device(location)
183 if getattr(obj, "_torch_load_uninitialized", False):
184 with torch.cuda.device(device):
File ~\anaconda3\lib\site-packages\torch\serialization.py:166, in validate_cuda_device(location)
163 device = torch.cuda._utils._get_device_index(location, True)
165 if not torch.cuda.is_available():
--> 166 raise RuntimeError('Attempting to deserialize object on a CUDA '
167 'device but torch.cuda.is_available() is False. '
168 'If you are running on a CPU-only machine, '
169 'please use torch.load with map_location=torch.device(\'cpu\') '
170 'to map your storages to the CPU.')
171 device_count = torch.cuda.device_count()
172 if device >= device_count:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
I think that the errors are due to incorrect or missing library installations in my virtual environment.
First, I created the virtual environment "bobwhite" using conda create bobwhite in my command prompt.
Then, I did multiple conda installations based on the import statements in the model_prediction_example.ipynb.
from matplotlib import pyplot as plt
import librosa
import numpy as np
import os
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
import torch
import utils
import model.net as net
So, I did the following installs in my command prompt:
conda install matplotlib
conda install -c conda-forge librosa
conda install numpy
conda install scipy
conda install scikit-image
conda install pytorch torchvision torchaudio cpuonly -c pytorch
conda install pip
pip install utils
However, I am not sure that I have installed the correct libraries to get the notebook to run. How would I find out which libraries are needed based on the import statements? Would I also need to import the libraries used in the python code net.py and utils.py as well? Additionally, I do not understand the import model.net as net statement. Is this referencing the net.py python script also found on the Github? If so, would I need to use a conda install for that, and how would I do it?
Change this line in load_checkpoint (line 136 of utils.py in your traceback):
checkpoint = torch.load(checkpoint)
to
map_location = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
checkpoint = torch.load(checkpoint, map_location=map_location)
The existing code tries to load the checkpoint onto CUDA even when it is not available; with map_location it falls back to the CPU when no GPU is present.
This part of your error's stacktrace points out the issue
File ~\NorthernBobwhiteCNN\PythonCode\utils.py:136, in load_checkpoint(checkpoint, model, optimizer)
134 if not os.path.exists(checkpoint):
135 raise("File doesn't exist {}".format(checkpoint))
--> 136 checkpoint = torch.load(checkpoint)
137 model.load_state_dict(checkpoint['state_dict'])
139 if optimizer:
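Applied inside the load_checkpoint function from the traceback, the patch could look like the sketch below. The 'optim_dict' key is an assumption (it is not shown in the traceback), and the original raise("File doesn't exist ...") is replaced with a real exception, since raising a plain string would itself fail:

```python
import os
import torch

def load_checkpoint(checkpoint, model, optimizer=None):
    """Load model (and optionally optimizer) state from a checkpoint file."""
    if not os.path.exists(checkpoint):
        raise FileNotFoundError("File doesn't exist {}".format(checkpoint))
    # Fall back to the CPU when no GPU is present instead of failing.
    map_location = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
    state = torch.load(checkpoint, map_location=map_location)
    model.load_state_dict(state['state_dict'])
    if optimizer:
        optimizer.load_state_dict(state['optim_dict'])  # key name is an assumption
    return state
```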

ValueError: can't read cfg files (sense2vec, reddit vectors)

I am relatively new to NLP and mostly use Jupyter; please let me know what I'm doing wrong.
I followed all the instructions provided here:
https://github.com/explosion/sense2vec
but when I try to use the reddit_vectors as described here:
s2v = Sense2VecComponent(nlp.vocab).from_disk("/path/to/s2v_reddit_2015_md")
I get a ValueError as shown below:
ValueError Traceback (most recent call last)
<ipython-input-36-0d396d0145de> in <module>
----> 1 s2v=Sense2Vec().from_disk('reddit_vectors-1.1.0/vectors.bin/')
~/.conda/envs/NewEnv6/lib/python3.7/site-packages/sense2vec/sense2vec.py in from_disk(self, path,
exclude)
343 cache_path = path / "cache"
344 self.vectors = Vectors().from_disk(path)
--> 345 self.cfg.update(srsly.read_json(path / "cfg"))
346 if freqs_path.exists():
347 self.freqs = dict(srsly.read_json(freqs_path))
~/.conda/envs/NewEnv6/lib/python3.7/site-packages/srsly/_json_api.py in read_json(location)
48 data = sys.stdin.read()
49 return ujson.loads(data)
---> 50 file_path = force_path(location)
51 with file_path.open("r", encoding="utf8") as f:
52 return ujson.load(f)
~/.conda/envs/NewEnv6/lib/python3.7/site-packages/srsly/util.py in force_path(location,
require_exists)
19 location = Path(location)
20 if require_exists and not location.exists():
---> 21 raise ValueError("Can't read file: {}".format(location))
22 return location
23
ValueError: Can't read file: reddit_vectors-1.1.0/vectors.bin/cfg
I installed all the appropriate versions of the libraries/packages required in requirements.txt.
This is what worked for me:
If you are not using a virtual environment, check that all the libraries/packages listed in requirements.txt actually got installed properly; in my case one of them had not.
The path should lead to the folder containing the cfg file. (After the last update I recommend using the full absolute path instead of navigating inside the same project.)
Check your path to the reddit_vectors-x.x.x folder.
Put your reddit_vectors-x.x.x folder in the same folder as your .py file.
Use pathlib to be sure your path is correct:
from pathlib import Path
from sense2vec import Sense2Vec
path = Path(__file__).parent.joinpath('reddit_vectors-1.1.0')
s2v = Sense2Vec().from_disk(path)
If you still get the error, delete your reddit_vectors-x.x.x folder and re-extract the original reddit_vectors-x.x.x.tar.gz or reddit_vectors-x.x.x.zip.
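As the traceback shows, from_disk() looks for a cfg file inside the directory it is given, so a quick sanity check before loading avoids the confusing ValueError (the folder name here is just an example; point it at your own extraction location):

```python
from pathlib import Path

# Hypothetical extraction location; adjust to your own folder.
s2v_dir = Path("reddit_vectors-1.1.0")

# from_disk() expects the folder that *contains* the cfg file,
# not the vectors.bin file inside it.
cfg = s2v_dir / "cfg"
if not cfg.exists():
    print("cfg not found at", cfg, "- check the path you pass to from_disk()")
```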

Message: Unable to locate the model you have specified: Ion_auth_model Using "Ion Auth"

After renaming the "model" folder I reverted the folder name back to "model", and now I am facing this issue:
An uncaught Exception was encountered
Type: RuntimeException
Message: Unable to locate the model you have specified: Ion_auth_model
Filename: C:\xampp\htdocs\paper_auth\system\core\Loader.php
Line Number: 348
Backtrace:
File: C:\xampp\htdocs\paper_auth\application\libraries\Ion_auth.php
Line: 74
Function: model
File: C:\xampp\htdocs\paper_auth\application\controllers\Book_Class.php
Line: 7
Function: __construct
File: C:\xampp\htdocs\paper_auth\index.php
Line: 315
Function: require_once
My code was working properly before the rename; I renamed the folder by mistake in PhpStorm. Can anyone please help me?
Maybe it's a typo in your question, but if not: the folder name is "models", not "model".

pyglet "Unable to share contexts" exception when running PsychoPy demo twice

PsychoPy looks like just what I need. But I want to use my own development environment (a straightforward IPython prompt combined with the editor of my choice) instead of the provided IDE.
The trouble is that you seem to have to quit Python and relaunch after every PsychoPy run. If for example I cd to the ...../demos/coder/stimuli directory and type run gabor.py it runs fine, but if I then type run gabor.py again I get this exception from pyglet:
C:\snap\PsychoPy2\lib\site-packages\pyglet\window\win32\__init__.pyc in _create(self)
259 if not self._wgl_context:
260 self.canvas = Win32Canvas(self.display, self._view_hwnd, self._dc)
--> 261 self.context.attach(self.canvas)
262 self._wgl_context = self.context._context
263
C:\snap\PsychoPy2\lib\site-packages\pyglet\gl\win32.pyc in attach(self, canvas)
261 self._context = wglext_arb.wglCreateContextAttribsARB(canvas.hdc,
262 share, attribs)
--> 263 super(Win32ARBContext, self).attach(canvas)
C:\snap\PsychoPy2\lib\site-packages\pyglet\gl\win32.pyc in attach(self, canvas)
206 raise RuntimeError('Share context has no canvas.')
207 if not wgl.wglShareLists(share._context, self._context):
--> 208 raise gl.ContextException('Unable to share contexts')
209
210 def set_current(self):
ContextException: Unable to share contexts
Is there some sort of pyglet.cleanup() I can call (analogous to pygame.quit()) to allow PsychoPy scripts to run more than once in the same session? Or other way of avoiding this problem?
I'm using the Standalone PsychoPy distro version 1.81.02, untouched. The problem is not specific to IPython: it can also be demonstrated from the plain Python prompt if you disable sys.exit and type execfile('gabor.py') twice:
C:\snap\PsychoPy2\Lib\site-packages\PsychoPy-1.81.02-py2.7.egg\psychopy\demos\coder\stimuli>python
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import sys; sys.exit = lambda x:x
>>> execfile('gabor.py')
0.6560 WARNING Movie2 stim could not be imported and won't be available
1.6719 WARNING Monitor specification not found. Creating a temporary one...
>>>
>>> execfile('gabor.py')
Traceback (most recent call last):
[snip]
File "C:\snap\PsychoPy2\lib\site-packages\pyglet\gl\win32.py", line 208, in attach
raise gl.ContextException('Unable to share contexts')
pyglet.gl.ContextException: Unable to share contexts
I don't know how to undo all the pyglet/psychopy initialisation - neither is really designed for you to do this, so there would be some work here. But I'm not sure it's a good idea anyway to run scripts the way you are doing.
The PsychoPy app itself gets around this by launching each script in a new process. It means that you know the namespace is clean on each run. Running your script on top of the previous one can lead to some really hard-to-find bugs because you don't know in what state the previous script left the memory, graphics card and namespace.
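The same trick works from your own environment: launch each run in a fresh interpreter with subprocess, so pyglet's GL context is created from scratch every time (a sketch; gabor.py stands in for any PsychoPy script):

```python
import subprocess
import sys

def run_script(path):
    """Run a script in its own interpreter so each run starts with a
    clean pyglet/OpenGL state and namespace; returns the exit code."""
    return subprocess.call([sys.executable, path])

# run_script("gabor.py")  # safe to call repeatedly in one session
```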
cheers
Jon

Cannot run magic functions in ipython terminal

I am using Enthought's Canopy environment on a 64-bit Linux OS. Everything works fine in the IPython console attached to the editor. But when I run ipython in the terminal and try to use magic functions, I get the following error.
---------------------------------------------------------------------------
error Traceback (most recent call last)
<ipython-input-3-29a4050aa687> in <module>()
----> 1 get_ipython().show_usage()
/home/shahensha/Development/Canopy/appdata/canopy-1.0.3.1262.rh5-x86_64/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in show_usage(self)
2931 def show_usage(self):
2932 """Show a usage message"""
-> 2933 page.page(IPython.core.usage.interactive_usage)
2934
2935 def extract_input_lines(self, range_str, raw=False):
/home/shahensha/Development/Canopy/appdata/canopy-1.0.3.1262.rh5-x86_64/lib/python2.7/site-packages/IPython/core/page.pyc in page(strng, start, screen_lines, pager_cmd)
188 if screen_lines <= 0:
189 try:
--> 190 screen_lines += _detect_screen_size(screen_lines_def)
191 except (TypeError, UnsupportedOperation):
192 print(str_toprint, file=io.stdout)
/home/shahensha/Development/Canopy/appdata/canopy-1.0.3.1262.rh5-x86_64/lib/python2.7/site-packages/IPython/core/page.pyc in _detect_screen_size(screen_lines_def)
112 # Proceed with curses initialization
113 try:
--> 114 scr = curses.initscr()
115 except AttributeError:
116 # Curses on Solaris may not be complete, so we can't use it there
/home/shahensha/Development/Canopy/appdata/canopy-1.0.3.1262.rh5-x86_64/lib/python2.7/curses/__init__.pyc in initscr()
31 # instead of calling exit() in error cases.
32 setupterm(term=_os.environ.get("TERM", "unknown"),
---> 33 fd=_sys.__stdout__.fileno())
34 stdscr = _curses.initscr()
35 for key, value in _curses.__dict__.items():
error: setupterm: could not find terminfo database
So I installed a bare-bones IPython shell (not the one provided by Canopy), tried the same magic functions there, and they work fine.
Have I done something wrong with the installation? Please help
Thanks a lot
shahensha
This is not a solution, just an observation. My desktop is Mac OS X and I connect to a CentOS machine to run Enthought Canopy, both 64-bit. I get the same error message as the OP if I ssh from iTerm2, but not if I use the Terminal app.
I am not sure what the underlying reason is, but maybe someone can verify whether a similar situation holds on Linux. Interestingly, I can use either iTerm2 or Terminal with the local Canopy without any issues.
Update:
I just noticed that the TERM environment variable in iTerm2 was set to "xterm" while the Terminal app was showing "xterm-256color". Issuing the command export TERM="xterm-256color" before running Canopy's ipython in the terminal solves the issue for me in iTerm2.
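The effect can be reproduced from Python itself: curses looks up the terminfo entry named by TERM, so pointing TERM at an entry the linked ncurses can find is what makes setupterm succeed (a sketch; assumes an xterm-256color entry exists in the system terminfo database):

```python
import curses
import os

# Pick a terminfo entry the curses library can locate.
os.environ["TERM"] = "xterm-256color"
try:
    curses.setupterm()
    print("terminfo entry found for", os.environ["TERM"])
except curses.error as exc:
    print("setupterm failed:", exc)
```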
Problem reproduction:
$ python -c 'import curses; curses.setupterm()'
Traceback (most recent call last):
File "<string>", line 1, in <module>
_curses.error: setupterm: could not find terminfo database
This IRC log gave me the idea that this error had to do with libncursesw.
My Canopy version is 1.0.3.1262.rh5-x86_64. I have installed it to ~/src/canopy.
In ~/src/canopy/appdata/canopy-1.0.3.1262.rh5-x86_64/lib we can see that my canopy install has libncursesw.so.5.7.
My machine (Debian Wheezy 64bit) has libncursesw.so.5.9 (in /lib/x86_64-linux-gnu/libncursesw.so.5.9). I made canopy use this. You can toggle the problem on / off by using LD_PRELOAD and pointing at the .so file.
Solution
Replace libncurses.so.5.7 with libncurses.so.5.9:
CANOPYDIR=$HOME/src/canopy
CANOPYLIBS=$CANOPYDIR/appdata/canopy-1.0.3.1262.rh5-x86_64/lib/
SYSTEMLIBS=/lib/x86_64-linux-gnu
cp $SYSTEMLIBS/libncurses.so.5.9 $CANOPYLIBS
ln -sf $CANOPYLIBS/libncurses.so.5.9 $CANOPYLIBS/libncurses.so.5
It appears that Canopy User Python is not your default. See this article:
https://support.enthought.com/entries/23646538-Make-Canopy-s-Python-be-your-default-Python-i-e-on-the-PATH-
Update: Not true here -- instead, see batu's workaround answer.