Error converting Markdown into PDF in Jupyter notebook

I can't convert a Markdown notebook into PDF, but I am able to convert code cells. The error I get is:
500 : Internal Server Error
The error was:
nbconvert failed: PDF creating failed
[I 10:32:16.940 NotebookApp] Running pdflatex 3 times: [u'pdflatex', u'notebook.tex']
[C 10:32:17.174 NotebookApp] pdflatex failed: [u'pdflatex', u'notebook.tex']
This is pdfTeX, Version 3.14159265-2.6-1.40.16 (TeX Live 2015) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
(./notebook.tex
LaTeX2e <2015/01/01>
Babel <3.9l> and hyphenation patterns for 79 languages loaded.
tex/latex/amsfonts/umsb.fd)
LaTeX Warning: No \author given.
! Missing $ inserted.
<inserted text>
$
l.232 ...l the conduits, \$ \frac{\Delta P}{\mu L}
\$ are set to be 1.0
?
! Emergency stop.
<inserted text>
$
l.232 ...l the conduits, \$ \frac{\Delta P}{\mu L}
\$ are set to be 1.0
! ==> Fatal error occurred, no output PDF file produced!
Transcript written on notebook.log.
[I 10:32:17.175 NotebookApp] Running bibtex 1 time: [u'bibtex', u'notebook']
[W 10:32:17.262 NotebookApp] bibtex had problems, most likely because there were no citations
[I 10:32:17.262 NotebookApp] Running pdflatex 3 times: [u'pdflatex', u'notebook.tex']
[C 10:32:17.501 NotebookApp] pdflatex failed: [u'pdflatex', u'notebook.tex']
This is pdfTeX, Version 3.14159265-2.6-1.40.16 (TeX Live 2015) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
LaTeX Warning: No \author given.
! Missing $ inserted.
<inserted text>
$
l.232 ...l the conduits, \$ \frac{\Delta P}{\mu L}
\$ are set to be 1.0
?
! Emergency stop.
<inserted text>
$
l.232 ...l the conduits, \$ \frac{\Delta P}{\mu L}
\$ are set to be 1.0
! ==> Fatal error occurred, no output PDF file produced!
Transcript written on notebook.log.
[W 10:32:17.503 NotebookApp] 500 GET /nbconvert/pdf/ComputerProject1.ipynb?download=true (::1): nbconvert failed: PDF creating failed
[E 10:32:17.505 NotebookApp] {
"Accept-Language": "en-US,en;q=0.8",
"Accept-Encoding": "gzip, deflate, sdch",
"Connection": "keep-alive",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36",
"Host": "localhost:8888",
"Referer": "http://localhost:8888/notebooks/ComputerProject1.ipynb",
"Upgrade-Insecure-Requests": "1"
}
[E 10:32:17.505 NotebookApp] 500 GET /nbconvert/pdf/ComputerProject1.ipynb?download=true (::1) 787.74ms referer=http://localhost:8888/notebooks/ComputerProject1.ipynb
I have pandoc and MacTeX installed. Does anyone know how to solve this?
Thanks.
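Judging from the log, the failure comes from line 232 of the generated notebook.tex, where the formula is wrapped in escaped dollar signs (\$ ... \$). An escaped dollar sign produces a literal $ character, so \frac{\Delta P}{\mu L} ends up outside math mode and pdflatex aborts with "Missing $ inserted". Assuming the Markdown cell matches what the log shows, using unescaped math delimiters in the cell should fix the conversion:
...the conduits, $\frac{\Delta P}{\mu L}$ are set to be 1.0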

Related

Selenium webdriver: Unknown error: cannot determine loading status [duplicate]

I'm using InstaPy, which uses Python and Selenium. I start the script via cron, and from time to time it crashes. It's really irregular; sometimes it runs through fine. I've already posted on the GitHub repo but didn't get an answer there, so I'm asking here in case someone has an idea why.
It's a DigitalOcean Ubuntu server and I'm running in headless mode. The driver versions are visible in the log. Here are the error messages:
ERROR [2018-12-10 09:53:54] [user] Error occurred while deleting cookies from web browser!
b'Message: invalid session id\n (Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64)\n'
Traceback (most recent call last):
File "/root/InstaPy/instapy/util.py", line 1410, in smart_run
yield
File "./my_config.py", line 43, in <module>
session.follow_user_followers(['xxxx','xxxx','xxxx','xxxx'], amount=100, randomize=True, interact=True)
File "/root/InstaPy/instapy/instapy.py", line 2907, in follow_user_followers
self.logfolder)
File "/root/InstaPy/instapy/unfollow_util.py", line 883, in get_given_user_followers
channel, jumps, logger, logfolder)
File "/root/InstaPy/instapy/unfollow_util.py", line 722, in get_users_through_dialog
person_list = dialog_username_extractor(buttons)
File "/root/InstaPy/instapy/unfollow_util.py", line 747, in dialog_username_extractor
person_list.append(person.find_element_by_xpath("../../../*")
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webelement.py", line 351, in find_element_by_xpath
return self.find_element(by=By.XPATH, value=xpath)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webelement.py", line 659, in find_element
{"using": by, "value": value})['value']
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: session deleted because of page crash
from unknown error: cannot determine loading status
from tab crashed
(Session info: headless chrome=70.0.3538.110)
(Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/InstaPy/instapy/instapy.py", line 3845, in end
self.browser.delete_all_cookies()
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 878, in delete_all_cookies
self.execute(Command.DELETE_ALL_COOKIES)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: chrome not reachable
(Session info: headless chrome=71.0.3578.80)
(Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64)
Any idea what the reason could be and how to solve it?
Thanks for the input. The guys from http://treestones.ch/ helped me out.
Though you see the error as:
Error occurred while deleting cookies from web browser!
b'Message: invalid session id\n (Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64)\n'
The main exception is:
selenium.common.exceptions.WebDriverException: Message: unknown error: session deleted because of page crash
from unknown error: cannot determine loading status
from tab crashed
Your code trials would have given us some clues about what is going wrong.
Solution
There are diverse solutions to this issue. However, as per UnknownError: session deleted because of page crash from tab crashed, this issue can be solved by either of the following solutions:
Add the following chrome_options:
chrome_options.add_argument('--no-sandbox')
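For context, here is a minimal sketch of where that flag fits; the options object and driver construction are assumed, since the question doesn't show them:
from selenium import webdriver

# Assumed setup for the chrome_options object referenced above
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')    # the question runs headless
chrome_options.add_argument('--no-sandbox')  # workaround for the page crash
driver = webdriver.Chrome(options=chrome_options)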
Chrome seems to crash in Docker containers on certain pages when /dev/shm is too small, so you may have to increase the /dev/shm size.
An example:
sudo mount -t tmpfs -o rw,nosuid,nodev,noexec,relatime,size=512M tmpfs /dev/shm
It also works if you use the -v /dev/shm:/dev/shm option to share the host's /dev/shm.
Another way to make it work is to add --disable-dev-shm-usage to chrome_options. This forces Chrome to use the /tmp directory instead. It may slow down execution, though, since disk will be used instead of memory.
chrome_options.add_argument('--disable-dev-shm-usage')
from tab crashed
from tab crashed was a work in progress (WIP) with the Chromium team for quite some time; it relates to Linux attempting to always use /dev/shm for non-executable memory. Here are the references:
Linux: Chrome/Chromium SIGBUS/Aw, Snap! on small /dev/shm
Chrome crashes/fails to load when /dev/shm is too small, and location can't be overridden
As per Comment 61 of Issue 736452, the fix seems to have landed with Chrome v65.0.3299.6.
Reference
You can find a couple of relevant discussions in:
org.openqa.selenium.SessionNotCreatedException: session not created exception from tab crashed error when executing from Jenkins CI server
In case someone is facing this problem with Docker containers:
Use the flag --shm-size=2g when creating the container and the error is gone.
This flag makes the container use the host's shared memory.
Example
$ docker run -d --net gridNet2020 --shm-size="2g" -e SE_OPTS="-browser applicationName=zChromeNodePdf30,browserName=chrome,maxInstances=1,version=78.0_debug_pdf" -e HUB_HOST=selenium-hub-3.141.59 -P -p 5700:5555 --name zChromeNodePdf30 -v /var/lib/docker/sharedFolder:/home/seluser/Downloads selenium/node-chrome:3.141.59-xenon
Source: https://github.com/SeleniumHQ/docker-selenium
I was getting the following error on my Ubuntu server:
selenium.common.exceptions.WebDriverException: Message: unknown error: session deleted because of page crash from tab crashed
(Session info: headless chrome=86.0.4240.111)
(Driver info: chromedriver=2.41.578700 (2f1ed5f9343c13f73144538f15c00b370eda6706), platform=Linux 5.4.0-1029-aws x86_64)
It turned out that the cause of the error was insufficient disk space on the server, and the solution was to extend my disk space. You can check this question for more information.
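As a quick check (a generic sketch, not from the original answer), you can inspect the free disk space on the server with:
df -h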
We need to specify the shm memory separately: --shm-size=2g.
In the case of Docker Compose, the following config works fine for me:
services:
  chrome:
    image: selenium/node-chrome:4.0.0-rc-1-prerelease-20210823
    shm_size: 2gb
Message: unknown error: session deleted because of page crash from unknown error: cannot determine loading status from tab crashed
(Session info: headless chrome=95.0.4638.69)
This error occurred because there was not enough waiting time for the web pages to load.
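The answer does not show how the waiting time was added; a minimal sketch using Selenium's explicit waits (the URL and locator here are hypothetical) could look like this:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://example.com')  # hypothetical page
# Wait up to 30 seconds for the page body to be present before interacting
WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.TAG_NAME, 'body'))
)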
The answers above solved my issue, but since I needed to run it from a docker-compose.yml, I used this configuration, which calls my regular, unchanged Dockerfile.
docker-compose.yml
version: '1.0'
services:
  my_app:
    build:
      context: .
      # when building
      shm_size: 1gb
    # when running
    shm_size: 1gb
Dockerfile (Selenium on Ubuntu under WSL)
FROM python:3.10
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# set display port to avoid crash
ENV DISPLAY=:99
# install selenium
RUN pip install selenium==3.8.0
# install and prepare app
COPY ./requirements.txt ./
# COPY . /app
RUN pip3 install -r requirements.txt
RUN apt-get install -y libnss3
ENV APP_DIR=/app/my_app
RUN mkdir -p ${APP_DIR}
WORKDIR ${APP_DIR}
# COPY . ${APP_DIR} #not needed since we are mapping the volume in docker-compose
CMD [ "my_app.py" ]
ENTRYPOINT [ "python" ]
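With the docker-compose.yml above, a typical invocation (assuming the Compose v1 CLI) would be:
docker-compose up --build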
This happened to me while trying to open a new web page with the same driver in Chromium. It worked fine on my local machine, where I use Chrome.
This did not work:
driver = webdriver.Chrome(options=options)
driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")
driver.execute_cdp_cmd('Network.setUserAgentOverride', {
    "userAgent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.53 Safari/537.36'})
driver.get('url1')
# Do operations with url1
driver.get('url2')
# Do operations with url2 -> did not work and crashed
Below is the solution I am using, which works for me: re-initializing the driver.
def setup_driver():
    global driver
    driver = webdriver.Chrome(options=options)
    driver.maximize_window()
    driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")
    driver.execute_cdp_cmd('Network.setUserAgentOverride', {
        "userAgent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.53 Safari/537.36'})
setup_driver()
driver.get('url1')
# Do operations with url1
driver.close()
setup_driver()
driver.get('url2')
# Do operations with url2
driver.close()
I'm not sure whether this is the only possible cause and solution, but after a thorough investigation of this error, which I encountered every now and then, I found the following evidence:
In the log of the Selenium Grid nodes (which you can show by executing sudo docker logs <container-id> on the Docker host) I found many errors reading [SEVERE]: bind() failed: Cannot assign requested address (99). From what I read, this error usually means that there are no available ports.
When showing the processes running inside a node (sudo docker exec -it <container-id> bash and then ps aux), I found more than 300 instances of chrome-driver processes (you can count them using ps aux | grep driver | wc -l).
When running locally, I know that the chrome-driver process is normally invoked when you create an instance of ChromeDriver and is terminated when you call driver.Quit() (I work in C#, not Python). Therefore I concluded that some tests don't call driver.Quit().
The conclusion
In my case, I found that even though we had a call to driver.Quit() in the [TearDown] method (we use NUnit), we had some more code before that line that could throw an exception. When one of those preceding lines threw, the line that calls driver.Quit() was never reached, and therefore over time we were "leaking" chrome-driver processes on the Selenium Grid nodes. These orphan processes caused a leak of available ports (and probably also memory), which in turn caused the browser's page to crash.
The solution
Given the above conclusion, the solution was pretty straightforward: wrap the code that precedes driver.Quit() in a try/finally, and put the call to driver.Quit() in the finally clause, like this:
[TearDown]
public void MyTearDown()
{
    try
    {
        // Perform any tear-down code you like: saving screenshots, page source, etc.
    }
    finally
    {
        _driver?.Quit();
    }
}
I was having the same problem. I checked the log to see at which point in my script the bug happened, added some wait, i.e. time.sleep(2), just before that point, and my problem was fixed.

How to validate metadata.xml against .dtd in Gentoo?

I am trying to validate metadata.xml against www.gentoo.org/dtd/metadata.dtd with xmllint from the =dev-libs/libxml2-2.9.3 ebuild.
I tried the commands (some from here):
$ xmllint --noout --valid metadata.xml
error : Unknown IO error
metadata.xml:2: warning: failed to load external entity "http://www.gentoo.org/dtd/metadata.dtd"
The same happens for xmllint metadata.xml --dtdvalid metadata.dtd
and for xmllint --loaddtd http://www.gentoo.org/dtd/metadata.dtd
$ xmllint --valid metadata.xml --schema metadata.dtd
metadata.dtd:1: parser error : StartTag: invalid element name
I need xmllint, and not mono-xmltool (from C#/CLI), because xmllint is what the repoman -d command uses, and repoman is used for Gentoo overlay validation in Travis CI.
How do I validate the XML with xmllint properly?
UPD:
The site returns "HTTP/1.1 301 Moved Permanently", and that is why the load fails.
Part of the strace output:
recvfrom(3, "HTTP/1.1 301 Moved Permanently\r\n"..., 4096, 0, NULL, NULL) = 446
recvfrom(3, "", 4096, 0, NULL, NULL) = 0
close(3) = 0
write(2, "error : ", 8error : ) = 8
write(2, "Unknown IO error\n", 17Unknown IO error
Probably libxml2 doesn't do HTTPS.
USE="icu ipv6 python readline -debug -examples -lzma -static-libs {-test}"
libxml2 uses nanoHTTP; nanoHTTP can work with HTTPS.
Your assumption was right; the problem is HTTPS. To work around this, and to save some bandwidth and time, repoman validates against a local file, which it prefetches if not found. The default location is either REPO_ROOT/metadata/dtd/metadata.dtd or DISTDIR/metadata.dtd. To get the exact arguments repoman uses for xmllint, you have to look at its source code - here. As you can see, it's:
xmllint --nonet --noout --dtdvalid <metadata.dtd> metadata.xml
This command still outputs:
metadata.xml:2: warning: failed to load external entity "https://www.gentoo.org/dtd/metadata.dtd"
<!DOCTYPE pkgmetadata SYSTEM "https://www.gentoo.org/dtd/metadata.dtd">
or in case of HTTP:
I/O error : Attempt to load network entity http://www.gentoo.org/dtd/metadata.dtd
metadata.xml:2: warning: failed to load external entity "http://www.gentoo.org/dtd/metadata.dtd"
<!DOCTYPE pkgmetadata SYSTEM "http://www.gentoo.org/dtd/metadata.dtd">
But only as a warning, so the command exits with 0.
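Putting that together, a minimal sketch of the local-file workflow (the file names are illustrative) would be:
# Prefetch the DTD once, then validate offline the way repoman does
wget -O metadata.dtd https://www.gentoo.org/dtd/metadata.dtd
xmllint --nonet --noout --dtdvalid metadata.dtd metadata.xml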

IPython Notebook running only as root

I was trying to open a notebook in IPython (Jupyter after updating). But for some reason, I am able to open notebooks only if I run as the root user. Otherwise, I get the following error for all notebooks:
An unknown error occurred while loading this notebook. This version can load notebook formats v4 or earlier. See the server log for details.
The IPython 3 notebook is able to load the notebooks, though.
Is there something that I can do to resolve this issue?
[W 23:04:29.100 NotebookApp] 404 GET /static/components/MathJax/config/Safe.js?rev=2.5.3 (127.0.0.1) 40.67ms referer=http://localhost:8889/notebooks/Challenges/German%20Credit%20Dataset%20Classification%20-%20Challenge%201/GermanCreditCardClassification.ipynb
[E 23:04:29.377 NotebookApp] Unhandled error in API request
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/notebook/base/handlers.py", line 436, in wrapper
result = yield gen.maybe_future(method(self, *args, **kwargs))
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 870, in run
value = future.result()
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 215, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 230, in wrapper
yielded = next(result)
File "/usr/local/lib/python2.7/dist-packages/notebook/services/contents/handlers.py", line 129, in get
path=path, type=type, format=format, content=content,
File "/usr/local/lib/python2.7/dist-packages/notebook/services/contents/filemanager.py", line 348, in get
model = self._notebook_model(path, content=content)
File "/usr/local/lib/python2.7/dist-packages/notebook/services/contents/filemanager.py", line 308, in _notebook_model
self.mark_trusted_cells(nb, path)
File "/usr/local/lib/python2.7/dist-packages/notebook/services/contents/manager.py", line 447, in mark_trusted_cells
trusted = self.notary.check_signature(nb)
File "/usr/local/lib/python2.7/dist-packages/nbformat/sign.py", line 220, in check_signature
if self.db is None:
File "/usr/local/lib/python2.7/dist-packages/traitlets/traitlets.py", line 439, in __get__
value = self._validate(obj, dynamic_default())
File "/usr/local/lib/python2.7/dist-packages/nbformat/sign.py", line 126, in _db_default
db = sqlite3.connect(self.db_file, **kwargs)
OperationalError: unable to open database file
[E 23:04:29.389 NotebookApp] {
"Accept-Language": "en-US,en;q=0.8",
"Accept-Encoding": "gzip, deflate, sdch",
"Connection": "keep-alive",
"Accept": "application/json, text/javascript, */*; q=0.01",
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.73 Safari/537.36",
"Dnt": "1",
"Host": "localhost:8889",
"X-Requested-With": "XMLHttpRequest",
"Referer": "http://localhost:8889/notebooks/Challenges/German%20Credit%20Dataset%20Classification%20-%20Challenge%201/GermanCreditCardClassification.ipynb"
}
[E 23:04:29.390 NotebookApp] 500 GET /api/contents/Challenges/German%20Credit%20Dataset%20Classification%20-%20Challenge%201/GermanCreditCardClassification.ipynb?type=notebook&_=1449266668869 (127.0.0.1) 134.27ms referer=http://localhost:8889/notebooks/Challenges/German%20Credit%20Dataset%20Classification%20-%20Challenge%201/GermanCreditCardClassification.ipynb
IPython Details
Server Information:
You are using Jupyter notebook.
The version of the notebook server is 4.0.2 and is running on:
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2]
Current Kernel Information:
unable to contact kernel
OS: Linux Mint x64 running on 3.13.0-37-generic Kernel
Answer:
KT.'s solution works. I think I ran Jupyter as root the first time, which caused the files to be inaccessible to non-root users, as mentioned by KT.
OK, let me take a guess here.
When you ran ipython notebook for the first time, it was under root. Consequently, some of the files that Jupyter uses were created as root-owned from the start. It is no surprise, then, that whenever you run Jupyter as a non-root user, it fails to write to those files, which results in the errors that you see.
In the particular situation that you see in the log, Jupyter has problems writing to the nbsignatures.db SQLite database. This file should be located in Jupyter's DATA_DIR, which is normally something like ~/.local/share/jupyter.
If you do not have this directory, you can find out where it is by running ipython and doing the following, for example:
In [1]: from jupyter_core.application import JupyterApp
In [2]: JupyterApp().data_dir
Out[2]: u'/home/ubuntu/.local/share/jupyter'
What you need to do now is to make sure that everything in that directory is owned by the correct user, not by root. This might mean doing something like
# chown -R <yourusername>.<yourusername> ~/.local/share/jupyter
as root.
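To verify the fix (a simple check, not part of the original answer), confirm afterwards that the files are owned by your user:
ls -la ~/.local/share/jupyter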
I just had a similar problem. Attached below are my error messages.
[E 19:32:53.893 NotebookApp] Unhandled error in API request
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/notebook/base/handlers.py", line 436, in wrapper
result = yield gen.maybe_future(method(self, *args, **kwargs))
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 870, in run
value = future.result()
File "/usr/local/lib/python2.7/site-packages/tornado/concurrent.py", line 215, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 230, in wrapper
yielded = next(result)
File "/usr/local/lib/python2.7/site-packages/notebook/services/contents/handlers.py", line 126, in get
path=path, type=type, format=format, content=content,
File "/usr/local/lib/python2.7/site-packages/notebook/services/contents/filemanager.py", line 350, in get
model = self._notebook_model(path, content=content)
File "/usr/local/lib/python2.7/site-packages/notebook/services/contents/filemanager.py", line 310, in _notebook_model
self.mark_trusted_cells(nb, path)
File "/usr/local/lib/python2.7/site-packages/notebook/services/contents/manager.py", line 447, in mark_trusted_cells
trusted = self.notary.check_signature(nb)
File "/usr/local/lib/python2.7/site-packages/nbformat/sign.py", line 220, in check_signature
if self.db is None:
File "/usr/local/lib/python2.7/site-packages/traitlets/traitlets.py", line 439, in __get__
value = self._validate(obj, dynamic_default())
File "/usr/local/lib/python2.7/site-packages/nbformat/sign.py", line 127, in _db_default
self.init_db(db)
File "/usr/local/lib/python2.7/site-packages/nbformat/sign.py", line 139, in init_db
)""")
DatabaseError: database disk image is malformed
[E 19:32:53.895 NotebookApp] {
"Accept-Language": "en,zh-CN;q=0.8,zh;q=0.6,zh-TW;q=0.4,fr;q=0.2,es;q=0.2",
"Accept-Encoding": "gzip, deflate, sdch",
"Connection": "keep-alive",
"Accept": "application/json, text/javascript, */*; q=0.01",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36",
"Host": "localhost:8888",
"Referer": "http://localhost:8888/notebooks/trajectory_analysis.ipynb",
"X-Requested-With": "XMLHttpRequest"
}
I solved it by removing one file: nbsignatures.db.
Because the location of the Jupyter data directory on OS X is system-dependent, first locate it using:
jupyter --data-dir
Then remove the nbsignatures.db inside it.
Reference: DatabaseError: database disk image is malformed #9293.
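For example, combining the two steps into one command (a sketch; the data directory path varies by system):
rm "$(jupyter --data-dir)/nbsignatures.db"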

Can't add 1+1 in IPython notebook

Since some updates of IPython and my efforts to install R in Jupyter, I just can't even add 1 and 1:
1+1 yields no output in a Python notebook (Jupyter).
The console from which the notebook is launched indicates some problem with IPKernelApp:
$ jupyter notebook
[I 16:15:44.792 NotebookApp] Serving notebooks from local directory: /home/jeanpat
[I 16:15:44.792 NotebookApp] 0 active kernels
[I 16:15:44.792 NotebookApp] The IPython Notebook is running at: http://localhost:8888/
[I 16:15:44.792 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
(process:11705): GLib-CRITICAL **: g_slice_set_config: assertion 'sys_page_size == 0' failed
[I 16:15:50.325 NotebookApp] Kernel started: 50c937a7-9ab6-456f-8e65-6d7de55301a6
[IPKernelApp] ERROR | No such comm target registered: ipython.widget
[I 16:17:50.327 NotebookApp] Saving file at /Untitled.ipynb
However, 1+1 yields 2 if executed in an IPython console:
~$ ipython
Python 2.7.9 (default, Apr 2 2015, 15:33:21)
Type "copyright", "credits" or "license" for more information.
IPython 4.0.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: 1+1
Out[1]: 2
Removing ~/.local/share/jupyter fixed the problem. Running in debug mode (jupyter notebook --debug) showed where Jupyter was looking for its kernels. – Jean-Pat
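Spelled out as commands (a sketch based on that comment; note that the second command deletes Jupyter's per-user data, including notebook signatures and installed kernelspecs):
jupyter notebook --debug       # the debug log shows which data/kernel paths are searched
rm -rf ~/.local/share/jupyter  # remove the stale per-user Jupyter data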

Swift TempAuth returned 404 when HEADing the account

I am a newbie to Swift, but I was trying to install it on my CentOS 6.5 VM. I have done the following:
Installing the latest Swift release (1.12.0) and python-swiftclient (2.0.2) and their dependencies
Preparing and mounting my drive (a separate device formatted as XFS) at /svr/node/d1
Creating the rings and adding the device to the rings (account, container, object)
Building the rings, which generates one .ring.gz file for each ring, and placing them in /etc/swift
Configuring hash_path_prefix for the proxy
Setting up TempAuth and adding a new user 'myaccount:me' with password 'pa'
Starting the proxy and account servers
I would expect to be able to successfully run
swift -A http://localhost:8080/auth/v1.0 -U myaccount:me -K pa stat
but the command told me 'Account not found'. To see detailed information, I ran
swift --debug -v -A http://localhost:8080/auth/v1.0 -U myaccount:me -K pa stat
The output is:
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost
DEBUG:requests.packages.urllib3.connectionpool:"GET /auth/v1.0 HTTP/1.1" 200 0
DEBUG:swiftclient:REQ: curl -i http://localhost:8080/auth/v1.0 -X GET
DEBUG:swiftclient:RESP STATUS: 200 OK
DEBUG:swiftclient:RESP HEADERS: [('content-length', '0'), ('x-trans-id', 'tx88b6b6b71ec14c3393248-00530de039'), ('x-auth-token', 'AUTH_tkdc7e842046e9469da324f2ec82c80a92'), ('x-storage-token', 'AUTH_tkdc7e842046e9469da324f2ec82c80a92'), ('date', 'Wed, 26 Feb 2014 12:38:17 GMT'), ('x-storage-url', 'http://localhost:8080/v1/AUTH_myaccount'), ('content-type', 'text/html; charset=UTF-8')]
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost
DEBUG:requests.packages.urllib3.connectionpool:"HEAD /v1/AUTH_myaccount HTTP/1.1" 404 0
INFO:swiftclient:REQ: curl -i http://localhost:8080/v1/AUTH_myaccount -I -H "X-Auth-Token: AUTH_tkdc7e842046e9469da324f2ec82c80a92"
INFO:swiftclient:RESP STATUS: 404 Not Found
INFO:swiftclient:RESP HEADERS: [('date', 'Wed, 26 Feb 2014 12:38:17 GMT'), ('content-length', '0'), ('content-type', 'text/html; charset=UTF-8'), ('x-trans-id', 'tx553c40e63c69470e9d146-00530de039')]
ERROR:swiftclient:Account HEAD failed: http://localhost:8080:8080/v1/AUTH_myaccount 404 Not Found
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/swiftclient/client.py", line 1192, in _retry
rv = func(self.url, self.token, *args, **kwargs)
File "/usr/lib/python2.6/site-packages/swiftclient/client.py", line 469, in head_account
http_response_content=body)
ClientException: Account HEAD failed: http://localhost:8080:8080/v1/AUTH_myaccount 404 Not Found
Account not found
I figured it out myself: in proxy-server.conf, add these two lines:
allow_account_management = true
account_autocreate = true
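For context, these options belong in the proxy server's app section; a sketch of the relevant fragment (the rest of the file is unchanged):
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true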