I have really liked Orange for the most part, but I haven't been able to get it to work with my SQLite data files. I've tried the SQL Select widget, but it doesn't seem to accept any connect string I pass it, e.g.
sqlite:///Users/me/test.db/
The correct path to test.db is /Users/me/test.db.
I always see the following error:
Traceback (most recent call last):
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Orange/OrangeWidgets/Utilities/OWDatabasesPack.py", line 120, in _error
u"Error: {0}".format(error.errorString())
AttributeError: 'NetworkError' object has no attribute 'errorString'
With sqlite3:///Users/zach/test.db/ instead, I get:
Traceback (most recent call last):
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Orange/OrangeWidgets/Prototypes/OWSQLSelect.py", line 84, in connectDB
self.sqlReader.connect(connectString)
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Orange/utils/init.py", line 214, in wrap_call
return func(*args, **kwargs)
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Orange/data/sql.py", line 195, in connect
(self.quirks, self.conn) = _connection(uri)
TypeError: 'NoneType' object is not iterable
Orange has never supported SQLite.
You are also using Orange 2, which has been deprecated for some years now.
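If you still need your SQLite data inside Orange, one workaround is to dump the table to a CSV file with Python's standard library and load that file through the File widget instead. A minimal sketch, assuming a plain CSV is acceptable to your Orange version's File widget (the table name is a placeholder):

import csv
import sqlite3

# Dump one table from the SQLite file into a CSV file Orange can read.
conn = sqlite3.connect("/Users/me/test.db")
cur = conn.execute("SELECT * FROM my_table")  # placeholder table name

with open("/Users/me/test.csv", "w") as out:
    writer = csv.writer(out)
    writer.writerow([col[0] for col in cur.description])  # column names as the header row
    writer.writerows(cur)

conn.close()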
I was using xgboost4j-0.90.jar in PySpark alongside its working version of sparkxgb.zip. Everything was working well until I decided to update to xgboost4j-1.1.2.jar. Since I'm on Scala 2.11 and can't change the Scala version for other reasons, the most recent xgboost4j version I could find that is compatible with Scala 2.11 was xgboost4j-1.1.2.
The problem is that now, when I try to build a model in PySpark with the XGBoostClassifier() class, I get the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-56249c4ed3fb> in <module>()
----> 1 XGBoostClassifier()
/opt/cloudera/parcels/CDH-6.3.4-1.cdh6.3.4.p0.6626826/lib/spark/python/pyspark/__init__.py in wrapper(self, *args, **kwargs)
108 raise TypeError("Method %s forces keyword arguments." % func.__name__)
109 self._input_kwargs = kwargs
--> 110 return func(self, **kwargs)
111 return wrapper
112
/tmp/spark-da5b0f7f-7899-450b-a3bc-f9359c37ac9c/userFiles-805b892a-61df-41ed-83eb-b7127f3f7765/sparkxgb.zip/sparkxgb/xgboost.py in __init__(self, alpha, baseMarginCol, baseScore, checkpointInterval, checkpointPath, colsampleBylevel, colsampleBytree, contribPredictionCol, customEval, customObj, eta, evalMetric, featuresCol, gamm, growPolicy, labelCol, reg_lambda, lambdaBias, leafPredictionCol, maxBin, maxDeltaStep, maxDepth, minChildWeight, missing, normalizeType, nthread, numClass, numEarlyStoppingRounds, numRound, numWorkers, objective, predictionCol, probabilityCol, rateDrop, rawPredictionCol, sampleType, scalePosWeight, seed, silent, sketchEps, skipDrop, subsample, thresholds, timeoutRequestWorkers, trainTestRatio, treeLimit, treeMethod, useExternalMemory, weightCol)
64
65 super(XGBoostClassifier, self).__init__()
---> 66 self._java_obj = self._new_java_obj("ml.dmlc.xgboost4j.scala.spark.XGBoostClassifier", self.uid)
67 self._create_params_from_java()
68 self._setDefault() # We get our defaults from the embedded Scala object, so no need to specify them here.
/opt/cloudera/parcels/CDH-6.3.4-1.cdh6.3.4.p0.6626826/lib/spark/python/pyspark/ml/wrapper.py in _new_java_obj(java_class, *args)
65 java_obj = getattr(java_obj, name)
66 java_args = [_py2java(sc, arg) for arg in args]
---> 67 return java_obj(*java_args)
68
69 @staticmethod
TypeError: 'JavaPackage' object is not callable
I researched this and found a question here that seems to be exactly the same issue. The user who posted it also posted the answer, saying the problem was the version of the sparkxgb.zip wrapper he was using, which had been made for a much older version of the xgboost4j package.
I then tried to find the right version of the sparkxgb.zip wrapper, but after hours of googling I couldn't find any site that lists all the versions and which xgboost4j release each one works with. All I found were standalone links leading straight to a sparkxgb.zip file, with no way to tell whether it matched my version of xgboost4j.
Could someone tell me if the error I'm getting is related to the sparkxgb.zip file? If so, where can I get the right one for xgboost4j-1.1.2.jar to use with PySpark? And if the problem is not related to the zip wrapper, could someone help me fix it so that xgboost4j-1.1.2 works for me?
Thank you very much in advance.
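For what it's worth, a 'JavaPackage' object is not callable error raised from _new_java_obj generally means the JVM never loaded the class the wrapper is asking for, either because the xgboost4j jars are not on the Spark classpath or because the Python wrapper targets class names from a different xgboost4j release. Below is a sketch of how both the jars and the wrapper are usually handed to PySpark; all file names and paths are assumptions, including the companion xgboost4j-spark jar that actually contains XGBoostClassifier:

from pyspark.sql import SparkSession

# Hypothetical local paths; point these at wherever your jars and wrapper live.
jars = "/path/to/xgboost4j-1.1.2.jar,/path/to/xgboost4j-spark-1.1.2.jar"

spark = (SparkSession.builder
         .appName("xgboost4j-test")
         .config("spark.jars", jars)  # ship the jars to the driver and executors
         .getOrCreate())

# Make the Python wrapper importable on the driver and the executors.
spark.sparkContext.addPyFile("/path/to/sparkxgb.zip")

from sparkxgb.xgboost import XGBoostClassifier

clf = XGBoostClassifier()  # this is the call that raises 'JavaPackage' when the Scala class is missing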
I've been at this for literally a day; none of the YouTube videos go beyond a very basic example. Please help. I'm sure I'm missing something really basic here.
Would it change things if the input boxes are embedded in a table? Here is my code:
from robobrowser import RoboBrowser
br = RoboBrowser(history=True, parser = 'html.parser', user_agent='Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11')
br.open('https://fbcad.org/Advanced-Search')
form = br.get_form(id='Form')
form['NameFirst'] = "john"
form['NameLast'] = "smith"
br.submit_form(form)
Here is the error:
C:\Python\Python37\python.exe C:/Python/Python37/FBCAD.py
Traceback (most recent call last):
File "C:/Python/Python37/FBCAD.py", line 7, in <module>
form['NameFirst'] = "john"
File "C:\Python\Python37\lib\site-packages\robobrowser\forms\form.py", line 216, in __setitem__
self.fields[key].value = value
File "C:\Python\Python37\lib\site-packages\werkzeug\datastructures.py", line 784, in __getitem__
raise exceptions.BadRequestKeyError(key)
werkzeug.exceptions.BadRequestKeyError: 400 Bad Request: The browser (or proxy) sent a request that this server could not understand.
Wow, thanks!
As it turns out, the answer is to not ask a question on this forum and instead spend the weekend learning Selenium as an alternative. Thanks, Stack Overflow! Thanks, RoboBrowser!
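For anyone landing on the same BadRequestKeyError: it usually means the field name being assigned is not among the fields RoboBrowser parsed out of the form, often because the form is rendered by JavaScript, which RoboBrowser does not execute. A quick diagnostic (a sketch against the same page; the field names on the live page may differ) is to print what was actually parsed before assigning anything:

from robobrowser import RoboBrowser

br = RoboBrowser(history=True, parser='html.parser')
br.open('https://fbcad.org/Advanced-Search')

# List every form RoboBrowser found and the field names it parsed out of each.
for form in br.get_forms():
    print(form.action, list(form.fields.keys()))

If 'NameFirst' and 'NameLast' never show up in that output, there is nothing for form['NameFirst'] to set, which is where the BadRequestKeyError comes from.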
I am doing data augmentation for a segmentation task with Caffe. The Python layer I have written is raising an error. The layer definition is:
layer {
  name: 'myaug'
  type: 'Python'
  bottom: 'data'
  bottom: 'label'
  top: 'data'
  top: 'label'
  python_param {
    module: 'augLayer'
    layer: 'CompactData'
  }
}
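For reference, the python_param block tells Caffe to import augLayer.py and instantiate a class called CompactData from it. This is not my actual code, just a minimal sketch of the general shape such a layer has:

import caffe

class CompactData(caffe.Layer):
    # Skeleton only; the real augmentation logic is not shown here.

    def setup(self, bottom, top):
        # bottom[0] is 'data', bottom[1] is 'label'
        pass

    def reshape(self, bottom, top):
        # Tops mirror the bottoms (the layer is declared in-place in the prototxt).
        top[0].reshape(*bottom[0].data.shape)
        top[1].reshape(*bottom[1].data.shape)

    def forward(self, bottom, top):
        # Apply the augmentation to data and label together (identity here).
        top[0].data[...] = bottom[0].data
        top[1].data[...] = bottom[1].data

    def backward(self, top, propagate_down, bottom):
        pass  # nothing to backpropagate for a data-augmentation layer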
Here is the net drawing (image not reproduced here).
The error seems to be related to numpy:
File "/home/usersc/caffe/python/caffe/pycaffe.py", line 11, in <module>
import numpy as np
File "/home/usersc/anaconda2/envs/mycaffe/lib/python2.7/site-packages/numpy/__init__.py", line 142, in <module>
from . import add_newdocs
File "/home/usersc/anaconda2/envs/mycaffe/lib/python2.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/home/usersc/anaconda2/envs/mycaffe/lib/python2.7/site-packages/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
File "/home/usersc/anaconda2/envs/mycaffe/lib/python2.7/site-packages/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/home/usersc/anaconda2/envs/mycaffe/lib/python2.7/site-packages/numpy/core/__init__.py", line 22, in <module>
from . import _internal # for freeze programs
File "/home/usersc/anaconda2/envs/mycaffe/lib/python2.7/site-packages/numpy/core/_internal.py", line 14, in <module>
import ctypes
File "/home/usersc/anaconda2/envs/mycaffe/lib/python2.7/ctypes/__init__.py", line 7, in <module>
from _ctypes import Union, Structure, Array
ImportError: /home/usersc/anaconda2/envs/mycaffe/lib/python2.7/lib-dynload/_ctypes.so: undefined symbol: _PySlice_Unpack
I am not sure; I am wondering whether I should add a MemoryData layer to hold the augmented data for me, as in this link, since the data and label images should be sent synchronously. Is it that the data layer's memory should be cleared?
You have a problem importing numpy: this has nothing to do with your code/layer; your code has not even been run yet.
Make sure numpy is properly installed on your machine and that your $PYTHONPATH environment variable points to the right places.
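One quick way to check that the interpreter and its site-packages actually belong together is to run the exact Python that Caffe uses and print where everything comes from (a small diagnostic sketch):

import sys
print(sys.executable)        # which Python binary is actually running
print("\n".join(sys.path))   # where imports are resolved from

import ctypes                # the import that fails in your traceback
import numpy
print(numpy.__version__, numpy.__file__)

If ctypes already fails here, the _ctypes.so in that environment was most likely built for a different Python than the one running it; recreating the conda environment, or making sure Caffe is launched with that environment's own python, usually fixes it.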
Regarding memory: the way you defined your layer, it performs the augmentations "in-place", that is, it changes data and label blobs instead of making copies of the augmented inputs. Make sure you are okay with this kind of behavior. Furthermore, I don't think you need a "MemoryData" layer to carry out your augmentations, the "Python" layer should be enough.
At first, I used the API
KafkaUtils.createDirectStream(ssc=ssc,
                              topics=topics,
                              kafkaParams={"metadata.broker.list": brokers})
to consume Kafka messages. This works, but it always consumes from the latest offset, which is not what I want, so I changed the call to
KafkaUtils.createDirectStream(ssc=ssc,
                              topics=topics,
                              kafkaParams={"metadata.broker.list": brokers},
                              fromOffsets=fromOffset,
                              messageHandler=messageHnadler)
which lets me set fromOffsets, but when I run the same program I get the error below:
File "/Users/peterpan/Documents/software/spark-1.6.2/python/lib/pyspark.zip/pyspark/streaming/kafka.py", line 138, in createDirectStream
AttributeError: 'TopicPartition' object has no attribute '_jTopicAndPartition'
Am I missing something?
The problem was using the wrong type for fromOffsets: use pyspark.streaming.kafka.TopicAndPartition instead of kafka.structs.TopicPartition.
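In other words, the fromOffsets dict has to be keyed by pyspark.streaming.kafka.TopicAndPartition objects. A minimal sketch (topic name, partitions and offsets are placeholders):

from pyspark.streaming.kafka import KafkaUtils, TopicAndPartition

topic = "my_topic"  # placeholder topic name

# Start partitions 0 and 1 of the topic from offset 0.
fromOffset = {
    TopicAndPartition(topic, 0): 0,
    TopicAndPartition(topic, 1): 0,
}

stream = KafkaUtils.createDirectStream(
    ssc,
    topics=[topic],
    kafkaParams={"metadata.broker.list": brokers},
    fromOffsets=fromOffset)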
I am getting the error below when I try to import products from a CSV file.
I have searched for this error on SO and Google but had no luck.
Please guide me if anyone has an idea.
Fatal error: Call to a member function getImage() on a non-object in /public_html/app/code/core/Mage/Catalog/Model/Convert/Adapter/Product.php on line 812
The images for the imported products are in the /media/import folder.