I am running PyCharm on my MacBook.
Client settings:
Python Interpreter -> Python 3.7 (databricks-connect-6.4)
Cluster settings:
Databricks Runtime Version -> 6.4 (includes Apache Spark 2.4.5, Scala 2.11)
It worked well for months, but suddenly, without any updates made, I can't run my Python script from PyCharm against the Databricks cluster anymore.
The Error is ...
Caused by: java.lang.IllegalArgumentException: The cluster is running server version `dbr-6.4` but this client only supports Set(dbr-5.5)...
I restarted PyCharm, I switched the interpreter back and forth, I restarted the cluster, and I even restarted my MacBook, but nothing helped. The error message is simply false, because both the cluster and the client use the SAME version. When I execute my Python script, I can see that the cluster starts up, but the run fails at the end.
pyenv activate databricks-connect-6-4
pip freeze
Cython==0.29.21
databricks-connect==6.4.0
numpy==1.19.2
pandas==1.0.1
py4j==0.10.7
pyarrow==0.13.0
pycountry==20.7.3
python-dateutil==2.8.1
pytz==2020.1
six==1.15.0
It looks like this was caused by some internal changes on the server side, which prevent databricks-connect from working. You can always disable this check by setting the environment variable DEBUG_IGNORE_VERSION_MISMATCH to 1 (run export DEBUG_IGNORE_VERSION_MISMATCH=1 in the console before executing databricks-connect test); you can also set this environment variable in PyCharm.
Update: this should be fixed by Databricks Connect 6.4.2, which was just released.
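If you prefer to apply the workaround inside the script rather than in a shell, one hedged option is to set the variable before databricks-connect builds its SparkSession; the variable name comes from the answer above, everything else is illustrative:

```python
import os

# Must be set before databricks-connect constructs its SparkSession,
# so that the client-side server-version check is skipped.
os.environ["DEBUG_IGNORE_VERSION_MISMATCH"] = "1"
```

Alternatively, PyCharm lets you add environment variables under Run > Edit Configurations, which has the same effect without touching the script.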
Thanks for raising this. The Databricks Connect team has acknowledged this issue, and we are working on a patch to address it. We will keep you posted. In the meantime, you can use DEBUG_IGNORE_VERSION_MISMATCH as Alex pointed out.
Update: A compatible db-connect client has been released to fix this problem: version 6.4.2 (https://pypi.org/project/databricks-connect/6.4.2/, install with: pip install databricks-connect==6.4.2).
Related
Error on Terminal while running it locally
I am getting this error while running it locally. I created the environment using conda, with Python 3.8 and PyCaret 2.3.5.
Please help me to resolve it.
I created a new environment to run it and it still didn't work.
Based on that error message, it seems like you don't have PyCaret properly installed. Run conda list to see which packages are installed.
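Besides conda list, you can check from inside Python whether the interpreter you are actually running can find the package at all; a minimal sketch (the package name checked is just an example):

```python
import importlib.util

def is_installed(package_name):
    """Return True if the top-level package can be found on sys.path."""
    return importlib.util.find_spec(package_name) is not None

# If this prints False, the environment your script runs in lacks pycaret,
# even if some other conda environment has it.
print(is_installed("pycaret"))
```

This helps distinguish "the package is missing" from "the wrong environment is active", which is a common cause of this kind of error with conda.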
I am trying to use Snowpark (0.6.0) via Jupyter notebooks (after installing the Scala Almond kernel). I am using a Windows laptop and had to change the examples here a bit to work around Windows. I am following the documentation here:
https://docs.snowflake.com/en/developer-guide/snowpark/quickstart-jupyter.html
Ran into this error
java.lang.NoClassDefFoundError: Could not initialize class com.snowflake.snowpark.Session$
ammonite.$sess.cmd5$Helper.<init>(cmd5.sc:6)
ammonite.$sess.cmd5$.<init>(cmd5.sc:7)
ammonite.$sess.cmd5$.<clinit>(cmd5.sc:-1)
I also tried earlier with the IntelliJ IDE and got a bunch of errors about missing dependencies for log4j etc.
Can I get some help?
I have not set it up on Windows, only on Linux.
You have to do the setup steps for each notebook that is going to use Snowpark (apart from installing the kernel).
It's important to make sure you are using a unique folder for each notebook, as in step 2 in the guide.
What was the output of import $ivy.`com.snowflake:snowpark:0.6.0`?
I have installed WinPython and want to use Spyder. I use pip and virtual environments. I have followed the instructions here: modular approach. Everything works just dandy until the very last instruction: "Start a new IPython console (in Spyder). All packages installed in your venv environment should be available there."
I get the error: "Your Python environment or installation doesn't have the spyder-kernels module or the right version of it installed (>= 1.9.0 and < 1.10.0). Without this module is not possible for Spyder to create a console for you."
But I installed spyder-kernels in my venv, I can literally see it there, and I set the path to the Python installed in the venv. Everything should work, but it doesn't!
Any thoughts?
I asked C.A.M. Gerlach as suggested, and he spotted my error very quickly. The instructions at modular approach are correct, except they say pip install spyder-kernels==0.*, which I took literally. In fact, as per the error message, you need a later version, so I used pip install spyder-kernels==1.10 and that fixed it.
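The version constraint in the error (>= 1.9.0 and < 1.10.0) is easy to misread, since "1.10" is numerically newer than "1.9" even though it looks smaller as a string. A small illustration of comparing dotted versions numerically (this is just a sketch, not Spyder's actual check):

```python
def version_in_range(version, lower, upper):
    """Compare dotted version strings piece by piece, numerically."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(lower) <= parse(version) < parse(upper)

# Spyder's requirement from the error message: >= 1.9.0 and < 1.10.0
print(version_in_range("0.5.2", "1.9.0", "1.10.0"))  # too old for the range
print(version_in_range("1.9.4", "1.9.0", "1.10.0"))  # satisfies the range
```

This is why a pin like spyder-kernels==0.* cannot satisfy a >= 1.9.0 requirement.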
You may have to ask C.A.M. Gerlach if he has an update on the procedure: Spyder has evolved a bit with Spyder 4.
I'm trying to get Python 2.7 working on my OpsWorks instance, but I keep running into errors on startup.
My OpsWorks stack is set up with Chef version 11.10 and Berkshelf version 3.2.0.
My metadata.rb has the following in it:
depends "poise-python"
depends "apt", ">= 1.8.2"
My Berksfile is set up with:
source "https://supermarket.chef.io"
cookbook 'poise-python'
cookbook 'apt'
Every time I launch I keep getting the following error and I'm not sure how to resolve it:
Halite is not compatible with no_lazy_load false, please set
no_lazy_load true in your Chef configuration file.
I tried adding a chef/configuration.rb file to set no_lazy_load to true, but it doesn't seem to work. Frankly, I'm new to OpsWorks and Chef, so I may be missing something very basic.
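For reference, the setting the Halite error asks for is a single line in the Chef client configuration. Where OpsWorks actually reads client configuration from varies by stack setup, so treat the file location as an assumption; the setting itself is taken verbatim from the error message:

```ruby
# In the Chef client configuration (e.g. client.rb) -- the exact location on
# an OpsWorks instance is an assumption; the setting is from the Halite error.
no_lazy_load true
```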
More Info
The stack I'm taking over originally referenced python instead of poise-python, but I had switched away from that to resolve a different (but, I guess, related) error when I tried to run with it:
This resource is written with Chef 12.5 custom resources, and requires
at least Chef 12.0 used with the compat_resource cookbook, it will not
work with Chef 11.x clients, and those users must pin their cookbooks
to older versions or upgrade.
I tried pinning to an older version of python but still couldn't get it to work. Basically, I know this instance can run (the previous maintainer had it going), but I'm not sure what I'm missing.
After some Googling, I figured out how to make this work without upgrading the Chef version. I added the following line to my Berksfile:
cookbook 'build-essential', '= 3.2.0'
My cookbooks aren't compatible with Chef 11; you'll have to upgrade your stack to Chef 12.
I have a Jython script properly running on a Windows 7 machine with PostgreSQL 9. Trying to run the same script on RHEL 5 Linux with PostgreSQL 8.2 yields the error:
zxJDBC.DatabaseError: driver [org.postgresql.Driver] not found
I tried running the script like so:
CLASSPATH=$CLASSPATH:/path/to/postgresql.jar /path/to/jython /path/to/script.py
I also tried setting PYTHONPATH and JYTHONPATH similarly, all yielding the same error.
What am I doing wrong?
I think I resolved this. The issue was that I used the wrong driver version. Even though psql --version says 8.2, and even though that's the jar version I downloaded, using the 7.4 version of the driver resolved the issue.