Setting Specific Python in Zeppelin Interpreter - pyspark

What do I need to do beyond setting "zeppelin.pyspark.python" to make a Zeppelin interpreter use a specific Python executable?
Background:
I'm using Apache Zeppelin connected to a Spark+Mesos cluster. The cluster's worked fine for several years. Zeppelin is new and works fine in general.
But I'm unable to import numpy within functions applied to an RDD in pyspark. When I use Python's subprocess module to locate the Python executable, it shows that the code is running in the system Python, not in the virtualenv it needs to be in.
I've seen a few questions on this issue that say the fix is to set "zeppelin.pyspark.python" to point to the correct Python. I've done that and restarted the interpreter a few times, but it is still using the system Python.
Is there something additional I need to do? This is using Zeppelin 0.7.

On an older, custom snapshot build of Zeppelin I've been using on an EMR cluster, I set the following two properties to use a specific virtualenv:
"zeppelin.pyspark.python": "/path/to/bin/python",
"spark.executorEnv.PYSPARK_PYTHON": "/path/to/bin/python"

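To confirm that those two properties took effect, it helps to compare the driver's interpreter with what the executors report. A minimal check, run from a %pyspark paragraph (this assumes sc is the SparkContext Zeppelin injects):

%pyspark
import sys
# Interpreter the pyspark driver is running in
print("driver:    " + sys.executable)
# Interpreters the executors launch their Python workers with
print("executors: " + str(sc.parallelize(range(4), 4)
                            .map(lambda _: __import__("sys").executable)
                            .distinct()
                            .collect()))

If both paths point into the virtualenv, imports such as numpy inside RDD functions should resolve against that environment as well.
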
With your virtualenv activated, find the full path of its interpreter:
(my_venv)$ python
>>> import sys
>>> sys.executable
Then open the interpreter settings (http://localhost:8080/#/interpreters), search for 'python', and set `zeppelin.python` to the output of `sys.executable`.
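After restarting the interpreter, a quick sanity check (not part of the official setup) is to print the interpreter path from a %python paragraph:

%python
import sys
print(sys.executable)  # should now point at the virtualenv's python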

Related

Jupyter for Scala with spylon-kernel without having to install Spark

Based on web searches and strong recommendations, I am trying to run Jupyter locally for Scala (using spylon-kernel).
I was able to create a notebook, but when I try to run a Scala code snippet, I see the message "initializing scala interpreter" and, in the console, this error:
ValueError: Couldn't find Spark, make sure SPARK_HOME env is set or Spark is in an expected location (e.g. from homebrew installation).
I am not planning to install Spark. Is there a way I can still use Jupyter for Scala without installing Spark?
I am new to Jupyter and the ecosystem. Pardon me for the amateur question.
Thanks

PySpark always uses the system's Python

My machines have two Pythons:
① the system's Python: /usr/bin/python
② the user's (Anaconda) Python: ~/anaconda3/envs/Python3.6/bin/python3
Now I have a cluster with my Desktop (master) and Laptop (slave).
Every mode of the PySpark shell works fine if I set the following in both nodes' ~/.bashrc:
export PYSPARK_PYTHON=~/anaconda3/envs/Python3.6/bin/python3
export PYSPARK_DRIVER_PYTHON=~/anaconda3/envs/Python3.6/bin/python3
However, I want to use it with Jupyter Notebook, so I set the following in each node's ~/.bashrc:
export PYSPARK_PYTHON=~/anaconda3/envs/Python3.6/bin/python3
export PYSPARK_DRIVER_PYTHON="jupyter"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
Then I get a log showing that the Python version in the worker (2.7) differs from that in the driver (3.6).
My Spark version is:
spark-3.0.0-preview2-bin-hadoop3.2
I have read all the answers in
environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON
and
different version 2.7 than that in driver 3.6 in jupyter/all-spark-notebook
But no luck.
I guess the slave's Python 2.7 is the system's Python, not Anaconda's.
How can I force Spark's slave node to use Anaconda's Python?
Thanks~!
Jupyter is looking for ipython; you probably only have ipython installed in your system Python.
To use Jupyter with a different Python version, use a Python version manager (pyenv) together with a Python environment manager (virtualenv). Together they let you choose which Python version to run and which environment to install Jupyter into, with fully isolated Python versions and packages.
Install ipykernel and Jupyter in your chosen Python environment.
After you finish the steps above, you need to make sure the Spark worker switches to your chosen Python version and environment every time the Spark resource manager launches a worker executor. To do that, make sure a small script runs right after the resource manager connects to the worker:
go to the Python environment directory
source 'whatever/bin/activate'
Once that is in place, the Spark worker executors should run your chosen Python version and Jupyter.
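An alternative to relying on ~/.bashrc on every node is to pick the interpreter from inside the notebook itself, before the SparkSession is created. This is only a sketch; the Anaconda path below is illustrative and must exist at the same absolute location on every node:

import os
# Must be set before the SparkContext/SparkSession is created; ~ is not expanded here,
# so use the absolute path (illustrative)
os.environ["PYSPARK_PYTHON"] = "/home/user/anaconda3/envs/Python3.6/bin/python3"

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("anaconda-python-check").getOrCreate()

# Report which interpreter the workers actually launch
print(spark.sparkContext
           .parallelize(range(2), 2)
           .map(lambda _: __import__("sys").executable)
           .distinct()
           .collect())

The path in PYSPARK_PYTHON is what the executors use to launch their Python workers, so it has to be valid on the slave as well.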

How to use the Vegas visualization within a scala-spark jupyter notebook

When using the Scala kernel with Vegas, we see the nice charts.
But when switching to the scala-spark kernel, the imports no longer work.
What is the way to fix the imports for the Spark kernel?
As described here you'll probably need to tweak your notebook config to pre-load those libraries, so they are available at runtime.
Then you can do a normal import (without the funny $ivy syntax, which actually comes from Ammonite REPL).

Spark Cell magic not found

I have a Python 2 environment on Windows 10 with Jupyter Notebook.
After following the instructions in this tutorial, I managed to install Spark on Windows 10:
https://medium.com/@GalarnykMichael/install-spark-on-windows-pyspark-4498a5d8d66c
But when I try to run the cell magic for SQL, I get the following error:
ERROR:root:Cell magic %%sql not found.
When I use %lsmagic, I cannot find the sql cell magic among them.
I also noticed there was no option for a PySpark kernel when starting a new notebook in Jupyter.
Are you trying to use plain SQL or Spark SQL? I've used iPython-SQL, which was great, and there's also SparkMagic, which sounds like what you're looking for. Try installing SparkMagic, which does provide a %%sql magic.
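If SparkMagic is the route you take, the rough shape of the setup is below. This is a sketch, not the full install procedure: it assumes SparkMagic is already pip-installed and has a running Livy endpoint to talk to (see the SparkMagic README for the kernel installation steps):

# in a cell of a regular Python notebook
%load_ext sparkmagic.magics
%manage_spark   # add a Livy endpoint and create a session interactively

The %%sql cell magic itself comes with the SparkMagic wrapper kernels (PySpark, Spark), which also show up as new kernel options when starting a notebook, addressing the missing PySpark kernel you noticed.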

PySpark and PDB don't seem to mix

I'm building stand-alone Python programs that will use pyspark (and the elasticsearch-hadoop connector). I am also addicted to the Python Debugger (pdb) and want to be able to step through my code.
It appears I can't run pyspark with the PDB like I normally do
./pyspark -m pdb testCode.py
I get an error "pyspark does not support any application options"
Is it possible to run PySpark code from the standard Python interpreter, or do I need to give up pdb?
I also saw online that I need to include py4j-0.9-src.zip in my PYTHONPATH. When I do that, I can use the Python interpreter and step through my code, but I get an error "Py4JavaError: Py4JJava...t id=o18)" when it runs any of the PySpark code. That error seemed to indicate that I wasn't really interacting with Spark.
How do I approach this?
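For reference, the PYTHONPATH arrangement described above usually looks roughly like the sketch below, after which python -m pdb testCode.py works like for any other script. SPARK_HOME and the exact py4j zip name depend on your Spark version, so treat the paths as illustrative:

import os
import sys

# Make the pyspark package importable from a plain Python interpreter
spark_home = os.environ["SPARK_HOME"]
sys.path.insert(0, os.path.join(spark_home, "python"))
sys.path.insert(0, os.path.join(spark_home, "python", "lib", "py4j-0.9-src.zip"))

from pyspark import SparkConf, SparkContext

# A local master keeps everything on one machine, the easiest setup to step through with pdb
conf = SparkConf().setAppName("pdb-test").setMaster("local[*]")
sc = SparkContext(conf=conf)

print(sc.parallelize(range(10)).map(lambda x: x * x).collect())
sc.stop()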