Is there a unicodedata module for IronPython?

I'm trying to get the generic sample code for the Selenium 2 Sauce OnDemand service working with IronPython for some test work I'm doing, and I've run into a problem I can't quite figure out.
First, here's the environment:
Windows 7 Home Premium, 64bit.
IronPython 2.7.0.40 on .Net 4.0.30319.225
My path:
>>> sys.path
['.', 'C:\\users\\me\\scripts\\python', 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\Common7\\IDE\\Extensions\\Microsoft\\IronStudio\\0.4\\Lib', 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\Common7\\IDE\\Extensions\\Microsoft\\IronStudio\\0.4\\DLLs', 'C:\\opt\\win\\ipy\\Lib', 'C:\\opt\\win\\ipy\\DLLs', 'C:\\opt\\win\\ipy']
I'm aware that IronPython has issues using compressed eggs, so I've extracted the following libraries into the Lib directory on sys.path:
selenium (2.0b4dev)
rdflib (3.1.0)
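As a quick sanity check (my own, beyond the original setup notes), importing one of the extracted packages from the IronPython console should resolve to the extracted location rather than an egg:
import selenium
print selenium.__file__   # should point under C:\opt\win\ipy\Lib\selenium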
Now, the sample code from Sauce Labs:
import unittest
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

class Selenium2OnSauce(unittest.TestCase):
    def setUp(self):
        desired_capabilities = dict(platform="WINDOWS",
                                    browserName="firefox",
                                    version="3.6",
                                    name="Hello, Selenium 2!")
        self.driver = webdriver.Remote(
            desired_capabilities=desired_capabilities,
            command_executor="http://me:my-site-access-key@ondemand.saucelabs.com:80/wd/hub")

    def test_sauce(self):
        self.driver.get('http://example.saucelabs.com')
        assert "Sauce Labs" in self.driver.title

    def tearDown(self):
        self.driver.quit()

if __name__ == '__main__':
    unittest.main()
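For reference, I run the script with the IronPython interpreter (assuming ipy.exe is on the PATH):
ipy selenium2_test.py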
Here's the error I'm getting:
Traceback (most recent call last):
File "selenium2_test.py", line 3, in <module>
File "c:\opt\win\ipy\Lib\selenium\webdriver\__init__.py", line 18, in <module>
File "c:\opt\win\ipy\Lib\selenium\webdriver\firefox\webdriver.py", line 24, in <module>
File "c:\opt\win\ipy\Lib\selenium\webdriver\firefox\firefox_profile.py", line 23, in <module>
File "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\IronStudio\0.4\Lib\rdflib\__init__.py", line 65, in <module>
File "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\IronStudio\0.4\Lib\rdflib\namespace.py", line 282, in <module>
ImportError: No module named unicodedata
I've tried searching for packages that provide unicodedata (such as FePy), but none of them seem to work. I've tried copying the .pyd from my CPython 2.7 installation, but that didn't work either.
Is this something that's not yet available in IronPython 2.7? Alternatively, is there a library or namespace I can reference to accomplish the same task?
If not, I guess I'll have to go without Selenium 2 until the Iron guys ship a unicodedata module for IronPython 2.7. :(
Thanks,
Greg.

Alas, the unicodedata module is not currently included in IronPython. But fear not: the upcoming 2.7.1 release will have it, and all of your troubles shall be no more.
Sorry about that. As for when 2.7.1 will be released, I'm thinking early June. You can check https://github.com/IronLanguages/main/commit/5395af28b5794b0acf982ab87d17466925ab819f for the patch, but it's fairly large and fiddly because of the included Unicode database.

See @Michael Greene's comments on the original question. Here is the module with the edits as directed:
# Copyright (c) 2006 by Seo Sanghyeon
# 2006-07-13 sanxiyn Created
# 2006-07-19 sanxiyn Added unidata_version, normalize
# Modified 2011 - Greg Gauthier - To provide needed functionality for rdflib
#                 and other utilities, in IronPython 2.7.0

unidata_version = '3.2.0'

# --------------------------------------------------------------------
# Normalization

from System import String
from System.Text import NormalizationForm

def normalize(form, string):
    return String.Normalize(string, _form_mapping[form])

_form_mapping = {
    'NFC': NormalizationForm.FormC,
    'NFKC': NormalizationForm.FormKC,
    'NFD': NormalizationForm.FormD,
    'NFKD': NormalizationForm.FormKD,
}
# --------------------------------------------------------------------
# Character properties

from System.Globalization import CharUnicodeInfo

def _handle_default(function):
    def wrapper(unichr, default=None):
        result = function(unichr)
        if result != -1:
            return result
        if default is None:
            raise ValueError()
        return default
    return wrapper

decimal = _handle_default(CharUnicodeInfo.GetDecimalDigitValue)
digit = _handle_default(CharUnicodeInfo.GetDigitValue)
numeric = _handle_default(CharUnicodeInfo.GetNumericValue)

def category(unichr):
    uc = CharUnicodeInfo.GetUnicodeCategory(unichr)
    return _category_mapping[int(uc)]
_category_mapping = {
    0: 'Lu',
    1: 'Ll',
    2: 'Lt',
    3: 'Lm',
    4: 'Lo',
    5: 'Mn',
    6: 'Mc',
    7: 'Me',
    8: 'Nd',
    9: 'Nl',
    10: 'No',
    11: 'Zs',
    12: 'Zl',
    13: 'Zp',
    14: 'Cc',
    15: 'Cf',
    16: 'Cs',
    17: 'Co',
    18: 'Pc',
    19: 'Pd',
    20: 'Ps',
    21: 'Pe',
    22: 'Pi',
    23: 'Pf',
    24: 'Po',
    25: 'Sm',
    26: 'Sc',
    27: 'Sk',
    28: 'So',
    29: 'Cn',
}
# added for rdflib/ironpython
def decomposition(unichr):
    return String.Normalize(unichr.ToString(), NormalizationForm.FormD)
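To try it out, save the module as unicodedata.py somewhere on sys.path (e.g. C:\opt\win\ipy\Lib). A quick smoke test in the IronPython console (my example, not part of the original module):
import unicodedata
print unicodedata.unidata_version               # '3.2.0'
print unicodedata.normalize('NFC', u'e\u0301')  # u'\xe9' (e + combining acute)
print unicodedata.category(u'A')                # 'Lu'
print unicodedata.decimal(u'7')                 # 7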

Related

Receiving an error when trying to run chatterbot on pi

I am new to working with ChatterBot. I am attempting to run ChatterBot on my Raspberry Pi 4. After installing it, I attempted a basic program from ChatterBot's documentation website:
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer

chatbot = ChatBot('Bob')

# Create a new trainer for the chatbot
trainer = ChatterBotCorpusTrainer(chatbot)

# Train the chatbot based on the english corpus
trainer.train("chatterbot.corpus.english")

# Get a response to an input statement
response = chatbot.get_response("Hello, how are you today?")
print(response)
When I run this program I get this error message:
Traceback (most recent call last):
  File "/home/pi/Matthew.codes/JenkinsProject/chatbot_test.py", line 4, in <module>
    chatbot = ChatBot('Bob')
  File "/home/pi/.local/lib/python3.7/site-packages/chatterbot/chatterbot.py", line 28, in __init__
    self.storage = utils.initialize_class(storage_adapter, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/chatterbot/utils.py", line 33, in initialize_class
    return Class(*args, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/chatterbot/storage/sql_storage.py", line 20, in __init__
    super().__init__(**kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/chatterbot/storage/storage_adapter.py", line 23, in __init__
    'tagger_language', languages.ENG
  File "/home/pi/.local/lib/python3.7/site-packages/chatterbot/tagging.py", line 20, in __init__
    import spacy
  File "/home/pi/.local/lib/python3.7/site-packages/spacy/__init__.py", line 6, in <module>
    from .errors import setup_default_warnings
  File "/home/pi/.local/lib/python3.7/site-packages/spacy/errors.py", line 2, in <module>
    from .compat import Literal
  File "/home/pi/.local/lib/python3.7/site-packages/spacy/compat.py", line 38, in <module>
    from thinc.api import Optimizer  # noqa: F401
  File "/home/pi/.local/lib/python3.7/site-packages/thinc/api.py", line 2, in <module>
    from .initializers import normal_init, uniform_init, glorot_uniform_init, zero_init
  File "/home/pi/.local/lib/python3.7/site-packages/thinc/initializers.py", line 4, in <module>
    from .backends import Ops
  File "/home/pi/.local/lib/python3.7/site-packages/thinc/backends/__init__.py", line 7, in <module>
    from .ops import Ops
  File "/home/pi/.local/lib/python3.7/site-packages/thinc/backends/ops.py", line 15, in <module>
    from .cblas import CBlas
  File "thinc/backends/cblas.pyx", line 1, in init thinc.backends.cblas
  File "/home/pi/.local/lib/python3.7/site-packages/blis/__init__.py", line 3, in <module>
    from .cy import init
ImportError: /home/pi/.local/lib/python3.7/site-packages/blis/cy.cpython-37m-arm-linux-gnueabihf.so: undefined symbol: __atomic_load_8
I have tried adding storage adapters to the name class, but that did not work. As far as I can tell, I have installed all the dependencies.

Error creating universal sentence encoder embeddings using beam & tf transform

I have a simple Beam pipeline that takes some text and gets embeddings using the Universal Sentence Encoder with tf.Transform. It is very similar to the demo made using TF 1.
import tensorflow as tf
import apache_beam as beam
import tensorflow_transform.beam as tft_beam
import tensorflow_transform.coders as tft_coders
from apache_beam.options.pipeline_options import PipelineOptions
import tempfile

model = None

def embed_text(text):
    import tensorflow_hub as hub
    global model
    if model is None:
        model = hub.load(
            'https://tfhub.dev/google/universal-sentence-encoder/4')
    embedding = model(text)
    return embedding

def get_metadata():
    from tensorflow_transform.tf_metadata import dataset_schema
    from tensorflow_transform.tf_metadata import dataset_metadata

    metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema({
        'id': dataset_schema.ColumnSchema(
            tf.string, [], dataset_schema.FixedColumnRepresentation()),
        'text': dataset_schema.ColumnSchema(
            tf.string, [], dataset_schema.FixedColumnRepresentation())
    }))
    return metadata

def preprocess_fn(input_features):
    text_integerized = embed_text(input_features['text'])
    output_features = {
        'id': input_features['id'],
        'embedding': text_integerized
    }
    return output_features

def run(pipeline_options, known_args):
    argv = None  # if None, uses sys.argv
    pipeline_options = PipelineOptions(argv)
    pipeline = beam.Pipeline(options=pipeline_options)
    with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
        articles = (
            pipeline
            | beam.Create([
                {'id': '01', 'text': 'To be, or not to be: that is the question: '},
                {'id': '02', 'text': "Whether 'tis nobler in the mind to suffer "},
                {'id': '03', 'text': 'The slings and arrows of outrageous fortune, '},
                {'id': '04', 'text': 'Or to take arms against a sea of troubles, '},
            ]))

        articles_dataset = (articles, get_metadata())

        transformed_dataset, transform_fn = (
            articles_dataset
            | 'Extract embeddings' >> tft_beam.AnalyzeAndTransformDataset(preprocess_fn)
        )

        transformed_data, transformed_metadata = transformed_dataset

        _ = (
            transformed_data | 'Write embeddings to TFRecords' >> beam.io.tfrecordio.WriteToTFRecord(
                file_path_prefix='{0}'.format(known_args.output_dir),
                file_name_suffix='.tfrecords',
                coder=tft_coders.example_proto_coder.ExampleProtoCoder(
                    transformed_metadata.schema),
                num_shards=1
            )
        )

    result = pipeline.run()
    result.wait_until_finish()  # note: the Beam API is wait_until_finish, not wait_until_finished
Python 3.6.8, tf==2.0, tf_transform==0.15, apache-beam[gcp]==2.16 (I tried various compatible combos from https://github.com/tensorflow/transform).
I am getting an error when tf.Transform calls the graph analyzer:
...
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tensorflow_transform/beam/impl.py", line 462, in process
lambda: self._make_graph_state(saved_model_dir))
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tfx_bsl/beam/shared.py", line 221, in acquire
return _shared_map.acquire(self._key, constructor_fn)
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tfx_bsl/beam/shared.py", line 184, in acquire
result = control_block.acquire(constructor_fn)
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tfx_bsl/beam/shared.py", line 87, in acquire
result = constructor_fn()
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tensorflow_transform/beam/impl.py", line 462, in <lambda>
lambda: self._make_graph_state(saved_model_dir))
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tensorflow_transform/beam/impl.py", line 438, in _make_graph_state
self._exclude_outputs, self._tf_config)
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tensorflow_transform/beam/impl.py", line 357, in __init__
tensor_inputs = graph_tools.get_dependent_inputs(graph, inputs, fetches)
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tensorflow_transform/graph_tools.py", line 686, in get_dependent_inputs
sink_tensors_ready)
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tensorflow_transform/graph_tools.py", line 499, in __init__
table_init_op, graph_analyzer_for_table_init, translate_path_fn)
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tensorflow_transform/graph_tools.py", line 560, in _get_table_init_op_source_info
if table_init_op.type not in _TABLE_INIT_OP_TYPES:
AttributeError: 'Tensor' object has no attribute 'type' [while running 'Extract embeddings/TransformDataset/Transform']
Exception ignored in: <bound method CapturableResourceDeleter.__del__ of <tensorflow.python.training.tracking.tracking.CapturableResourceDeleter object at 0x14152fbe0>>
Traceback (most recent call last):
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/tracking.py", line 190, in __del__
File "/Users/justingrace/.pyenv/versions/hlx36/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3872, in as_default
File "/Users/justingrace/.pyenv/versions/3.6.8/lib/python3.6/contextlib.py", line 159, in helper
TypeError: 'NoneType' object is not callable
It appears that the graph analyzer is expecting a list of operations with a type attribute, but it is receiving a tensor. I can't grasp why this error is occurring, other than a bug in the graph analyzer or a compatibility issue with tfx_bsl (there seem to be issues with pyarrow 0.14, so I have downgraded to 0.13).
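For illustration of what the traceback is complaining about: in TensorFlow's graph mode, a tf.Operation has a .type attribute but a tf.Tensor does not (you reach it via .op.type). A minimal sketch of the distinction (my own, not part of the pipeline):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.constant([1.0])       # build a graph-mode tensor
print(hasattr(x, 'type'))    # False - a Tensor has no .type, hence the AttributeError
print(x.op.type)             # 'Const' - the producing Operation does have one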
Output of pip freeze:
absl-py==0.8.1
annoy==1.12.0
apache-beam==2.16.0
appnope==0.1.0
astor==0.8.1
astunparse==1.6.3
attrs==19.1.0
avro-python3==1.9.1
backcall==0.1.0
bleach==3.1.0
cachetools==3.1.1
certifi==2019.11.28
chardet==3.0.4
crcmod==1.7
cymem==1.31.2
cytoolz==0.9.0.1
decorator==4.4.1
defusedxml==0.6.0
dill==0.3.0
docopt==0.6.2
en-core-web-lg==2.0.0
en-coref-lg==3.0.0
en-ner-trained==2.0.0
entrypoints==0.3
fastavro==0.21.24
fasteners==0.15
flashtext==2.7
future==0.18.2
fuzzywuzzy==0.16.0
gast==0.2.2
google-api-core==1.16.0
google-apitools==0.5.28
google-auth==1.11.0
google-auth-oauthlib==0.4.1
google-cloud-bigquery==1.17.1
google-cloud-bigtable==1.0.0
google-cloud-core==1.3.0
google-cloud-datastore==1.7.4
google-cloud-pubsub==1.0.2
google-pasta==0.1.8
google-resumable-media==0.4.1
googleapis-common-protos==1.51.0
grpc-google-iam-v1==0.12.3
grpcio==1.24.0
h5py==2.10.0
hdfs==2.5.8
httplib2==0.12.0
idna==2.8
importlib-metadata==1.5.0
ipykernel==5.1.4
ipython==7.12.0
ipython-genutils==0.2.0
ipywidgets==7.5.1
jedi==0.16.0
Jinja2==2.11.1
jsonpickle==1.2
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==5.3.4
jupyter-console==6.1.0
jupyter-core==4.6.2
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
lxml==4.2.1
Markdown==3.2.1
MarkupSafe==1.1.1
mistune==0.8.4
mock==2.0.0
monotonic==1.5
more-itertools==8.2.0
msgpack==0.6.2
msgpack-numpy==0.4.4
murmurhash==0.28.0
nbconvert==5.6.1
nbformat==5.0.4
networkx==2.1
nltk==3.4.5
notebook==6.0.3
numpy==1.18.1
oauth2client==3.0.0
oauthlib==3.1.0
opt-einsum==3.1.0
packaging==20.1
pandas==0.23.0
pandocfilters==1.4.2
parso==0.6.1
pathlib2==2.3.5
pbr==5.4.4
pexpect==4.8.0
pickleshare==0.7.5
plac==0.9.6
pluggy==0.13.1
preshed==1.0.1
prometheus-client==0.7.1
prompt-toolkit==3.0.3
proto-google-cloud-datastore-v1==0.90.4
protobuf==3.11.3
psutil==5.6.7
ptyprocess==0.6.0
py==1.8.1
pyahocorasick==1.4.0
pyarrow==0.13.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pydot==1.4.1
Pygments==2.5.2
PyHamcrest==1.9.0
pymongo==3.10.1
pyparsing==2.4.6
pyrsistent==0.15.7
pytest==5.3.5
python-dateutil==2.8.0
python-Levenshtein==0.12.0
pytz==2019.3
PyYAML==3.13
pyzmq==18.1.1
qtconsole==4.6.0
regex==2017.4.5
repoze.lru==0.7
requests==2.22.0
requests-oauthlib==1.3.0
rsa==4.0
scikit-learn==0.19.1
scipy==1.4.1
Send2Trash==1.5.0
six==1.14.0
spacy==2.0.12
tb-nightly==2.2.0a20200217
tensorboard==2.0.2
tensorflow==2.0.0
tensorflow-estimator==2.0.1
tensorflow-hub==0.6.0
tensorflow-metadata==0.15.2
tensorflow-serving-api==2.1.0
tensorflow-transform==0.15.0
termcolor==1.1.0
terminado==0.8.3
testpath==0.4.4
textblob==0.15.1
tf-estimator-nightly==2.1.0.dev2020012309
tf-nightly==2.2.0.dev20200217
tfx-bsl==0.15.0
thinc==6.10.3
toolz==0.10.0
tornado==6.0.3
tqdm==4.23.3
traitlets==4.3.3
typing==3.7.4.1
typing-extensions==3.7.4.1
ujson==1.35
Unidecode==1.0.22
urllib3==1.25.8
wcwidth==0.1.8
webencodings==0.5.1
Werkzeug==1.0.0
Whoosh==2.7.4
widgetsnbextension==3.5.1
wrapt==1.11.2
zipp==2.2.0
This could be an underlying issue, according to this GitHub post. Try using an updated version of tensorflow (2.1.0), or maybe even updated versions of your Keras packages.
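Assuming you go the upgrade route, something along these lines should do it (exact compatible pins may differ):
pip install --upgrade tensorflow==2.1.0
pip install --upgrade Keras-Applications Keras-Preprocessing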

Issue with Geodjango OGR failure - GDAL Exception

I have gotten to the point where the model in Django is built, but once I add data and click save, my app crashes.
I get a GDAL exception - OGR Failure - and the crash page highlights "{{ field.field }}" as the reason for the error.
Here's a screenshot: admin view for adding data to the model.
I get this right away after clicking save.
Has anyone been through this? Any help?
Thanks
Update:
I'm getting this error:
[12/Apr/2019 14:38:46] "GET /static/admin/img/gis/move_vertex_off.svg HTTP/1.1" 200 1129
[12/Apr/2019 14:39:14] "GET /static/admin/img/gis/move_vertex_on.svg HTTP/1.1" 200 1129
GDAL_ERROR 4: b'Unable to open EPSG support file gcs.csv.\nTry setting the GDAL_DATA environment variable to point to the\ndirectory containing EPSG csv files.'
Internal Server Error: /admin/trial2/shop/add/
Traceback (most recent call last):
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\core\handlers\exception.py", line 34, in inner
    response = get_response(request)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\core\handlers\base.py", line 115, in _get_response
    response = self.process_exception_by_middleware(e, request)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\core\handlers\base.py", line 113, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\admin\options.py", line 606, in wrapper
    return self.admin_site.admin_view(view)(*args, **kwargs)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\utils\decorators.py", line 142, in _wrapped_view
    response = view_func(request, *args, **kwargs)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\views\decorators\cache.py", line 44, in _wrapped_view_func
    response = view_func(request, *args, **kwargs)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\admin\sites.py", line 223, in inner
    return view(request, *args, **kwargs)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\admin\options.py", line 1634, in add_view
    return self.changeform_view(request, None, form_url, extra_context)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\utils\decorators.py", line 45, in _wrapper
    return bound_method(*args, **kwargs)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\utils\decorators.py", line 142, in _wrapped_view
    response = view_func(request, *args, **kwargs)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\admin\options.py", line 1522, in changeform_view
    return self._changeform_view(request, object_id, form_url, extra_context)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\admin\options.py", line 1554, in _changeform_view
    form_validated = form.is_valid()
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\forms\forms.py", line 185, in is_valid
    return self.is_bound and not self.errors
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\forms\forms.py", line 180, in errors
    self.full_clean()
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\forms\forms.py", line 381, in full_clean
    self._clean_fields()
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\forms\forms.py", line 399, in _clean_fields
    value = field.clean(value)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\gis\forms\fields.py", line 79, in clean
    geom.transform(self.srid)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\gis\geos\geometry.py", line 471, in transform
    g = gdal.OGRGeometry(self._ogr_ptr(), srid)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\gis\gdal\geometries.py", line 115, in __init__
    self.srs = srs
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\gis\gdal\geometries.py", line 284, in _set_srs
    sr = SpatialReference(srs)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\gis\gdal\srs.py", line 92, in __init__
    self.import_epsg(srs_input)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\gis\gdal\srs.py", line 277, in import_epsg
    capi.from_epsg(self.ptr, epsg)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\gis\gdal\prototypes\errcheck.py", line 118, in check_errcode
    check_err(result, cpl=cpl)
  File "C:\Users\lgcge\Documents\copypostgis\venv\lib\site-packages\django\contrib\gis\gdal\error.py", line 59, in check_err
    raise e(msg)
django.contrib.gis.gdal.error.GDALException: OGR failure.
[12/Apr/2019 14:39:45] "POST /admin/trial2/shop/add/ HTTP/1.1" 500 160518
admin.py:
from django import forms
from django.contrib.gis import admin
from django.contrib.gis.db import models
from django.contrib.gis.admin import OSMGeoAdmin
from .models import Shop

@admin.register(Shop)
class ShopAdmin(OSMGeoAdmin):
    list_display = ('name', 'location')
models.py:
from __future__ import unicode_literals
from django.contrib.gis.db import models
from django.contrib.gis.geos import Point

class Shop(models.Model):
    name = models.CharField(max_length=100)
    location = models.PointField()
    address = models.CharField(max_length=100)
    city = models.CharField(max_length=50)
Apologies for copy/pasting and only slightly modifying @Kiwi's answer. I tried adding a comment to the original answer, but I could not add formatted code.
It seems that you cannot install the 64-bit version of OSGeo4W any more. I installed the only version available here and modified @Kiwi's answer to reflect the lack of 64-bit. I'm on Windows 10. All works fine with the below.
import os  # needed for os.environ below
import platform
import environ

WINDOWS = platform.system() == "Windows"
if WINDOWS:
    # the below needs to change for linux
    GDAL_LIBRARY_PATH = r'C:\OSGeo4W\bin\gdal303.dll'
    GEOS_LIBRARY_PATH = r'C:\OSGeo4W\bin\geos_c.dll'
    OSGEO4W = r"C:\OSGeo4W"
    os.environ['OSGEO4W_ROOT'] = OSGEO4W
    os.environ['GDAL_DATA'] = r"C:\Program Files\GDAL\gdal-data"  # OSGEO4W + r"\share\gdal"
    os.environ['PROJ_LIB'] = OSGEO4W + r"\share\proj"
    os.environ['PATH'] = OSGEO4W + r"\bin;" + os.environ['PATH']
if os.name == 'nt':
    import platform
    OSGEO4W = r"C:\OSGeo4W"
    if '64' in platform.architecture()[0]:
        OSGEO4W += "64"
    assert os.path.isdir(OSGEO4W), "Directory does not exist: " + OSGEO4W
    os.environ['OSGEO4W_ROOT'] = OSGEO4W
    os.environ['GDAL_DATA'] = r"C:\Program Files\GDAL\gdal-data"  # OSGEO4W + r"\share\gdal"
    os.environ['PROJ_LIB'] = OSGEO4W + r"\share\proj"
    os.environ['PATH'] = OSGEO4W + r"\bin;" + os.environ['PATH']
The issue was that this code enforces the paths defined in it over GeoDjango, so even if you set system variables, this code would still dominate over them.
If anyone has an issue with OGR, don't hesitate to comment.
I had the same problem, and this is how I was able to solve it.
First check your GDAL and GEOS versions by running the following commands:
gdal-config --version
and
geos-config --version
If the versions of GDAL and GEOS do not match, you will need to either install a compatible version of GEOS or update GDAL to a version that is compatible with the version of GEOS you have installed.
For example, if you have GEOS version 3.8.0 or above, you will need to upgrade to GDAL >= 3.1.2.
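You can also check which versions GeoDjango itself is linking against from a Django shell (a quick sketch, assuming a working install; gdal_version and geos_version are the helpers I'd expect in django.contrib.gis):
from django.contrib.gis.gdal import gdal_version
from django.contrib.gis.geos import geos_version
print(gdal_version())  # e.g. b'3.1.2'
print(geos_version())  # e.g. b'3.8.0'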

Hyperopt mongotrials issue with Pickle: AttributeError: 'module' object has no attribute

I'm trying to use Hyperopt parallel search with MongoDB, and encountered some issues with MongoTrials, which have been discussed here. I've tried all their methods, and I am still unable to find solutions to my specific problem. The specific model I'm trying to minimize is RandomForestRegressor from sklearn.
I've followed this tutorial, and I'm able to print out the calculated "fmin" with no issue.
Here are my steps so far:
1) Activate a virtual environment called "tensorflow" (I've installed all my libraries there)
2) Start MongoDB:
(tensorflow) bash-3.2$ mongod --dbpath . --port 1234 --directoryperdb --journal --nohttpinterface
3) Initiate workers:
(tensorflow) bash-3.2$ hyperopt-mongo-worker --mongo=localhost:1234/foo_db --poll-interval=0.1
4) Run my python code, and my python code is as follows:
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
from hyperopt.mongoexp import MongoTrials

# Preprocessing data
train_xg = pd.read_csv('train.csv')
n_train = len(train_xg)
print "Whole data set size: ", n_train

# Creating columns for features, and categorical features
features_col = [x for x in train_xg.columns if x not in ['id', 'loss', 'log_loss']]
cat_features_col = [x for x in train_xg.select_dtypes(include=['object']).columns if x not in ['id', 'loss', 'log_loss']]
for c in range(len(cat_features_col)):
    train_xg[cat_features_col[c]] = train_xg[cat_features_col[c]].astype('category').cat.codes

# Use this to train random forest regressor
train_xg_x = np.array(train_xg[features_col])
train_xg_y = np.array(train_xg['loss'])

space_rf = { 'min_samples_leaf': hp.choice('min_samples_leaf', range(1,100)) }
trials = MongoTrials('mongo://localhost:1234/foo_db/jobs', exp_key='exp1')

def minMe(params):
    # Hyperopt tuning for hyperparameters
    from sklearn.model_selection import cross_val_score
    from sklearn.ensemble import RandomForestRegressor
    from hyperopt import STATUS_OK
    try:
        import dill as pickle
        print('Went with dill')
    except ImportError:
        import pickle

    def hyperopt_rf(params):
        rf = RandomForestRegressor(**params)
        return cross_val_score(rf, train_xg_x, train_xg_y).mean()

    acc = hyperopt_rf(params)
    print 'new acc:', acc, 'params: ', params
    return {'loss': -acc, 'status': STATUS_OK}

best = fmin(fn=minMe, space=space_rf, trials=trials, algo=tpe.suggest, max_evals=100)
print "Best: ", best
5) After I run the above Python code, I get the following errors:
INFO:hyperopt.mongoexp:Error while unpickling. Try installing dill via "pip install dill" for enhanced pickling support.
INFO:hyperopt.mongoexp:job exception: 'module' object has no attribute 'minMe'
Traceback (most recent call last):
File "/Users/WernerChao/tensorflow/bin/hyperopt-mongo-worker", line 6, in <module>
sys.exit(hyperopt.mongoexp.main_worker())
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1302, in main_worker
return main_worker_helper(options, args)
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1249, in main_worker_helper
mworker.run_one(reserve_timeout=float(options.reserve_timeout))
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1064, in run_one
domain = pickle.loads(blob)
AttributeError: 'module' object has no attribute 'minMe'
INFO:hyperopt.mongoexp:PROTOCOL mongo
INFO:hyperopt.mongoexp:USERNAME None
INFO:hyperopt.mongoexp:HOSTNAME localhost
INFO:hyperopt.mongoexp:PORT 1234
INFO:hyperopt.mongoexp:PATH /foo_db/jobs
INFO:hyperopt.mongoexp:DB foo_db
INFO:hyperopt.mongoexp:COLLECTION jobs
INFO:hyperopt.mongoexp:PASS None
INFO:hyperopt.mongoexp:Error while unpickling. Try installing dill via "pip install dill" for enhanced pickling support.
INFO:hyperopt.mongoexp:job exception: 'module' object has no attribute 'minMe'
Traceback (most recent call last):
File "/Users/WernerChao/tensorflow/bin/hyperopt-mongo-worker", line 6, in <module>
sys.exit(hyperopt.mongoexp.main_worker())
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1302, in main_worker
return main_worker_helper(options, args)
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1249, in main_worker_helper
mworker.run_one(reserve_timeout=float(options.reserve_timeout))
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1064, in run_one
domain = pickle.loads(blob)
AttributeError: 'module' object has no attribute 'minMe'
INFO:hyperopt.mongoexp:PROTOCOL mongo
INFO:hyperopt.mongoexp:USERNAME None
INFO:hyperopt.mongoexp:HOSTNAME localhost
INFO:hyperopt.mongoexp:PORT 1234
INFO:hyperopt.mongoexp:PATH /foo_db/jobs
INFO:hyperopt.mongoexp:DB foo_db
INFO:hyperopt.mongoexp:COLLECTION jobs
INFO:hyperopt.mongoexp:PASS None
INFO:hyperopt.mongoexp:Error while unpickling. Try installing dill via "pip install dill" for enhanced pickling support.
INFO:hyperopt.mongoexp:job exception: 'module' object has no attribute 'minMe'
Traceback (most recent call last):
File "/Users/WernerChao/tensorflow/bin/hyperopt-mongo-worker", line 6, in <module>
sys.exit(hyperopt.mongoexp.main_worker())
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1302, in main_worker
return main_worker_helper(options, args)
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1249, in main_worker_helper
mworker.run_one(reserve_timeout=float(options.reserve_timeout))
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1064, in run_one
domain = pickle.loads(blob)
AttributeError: 'module' object has no attribute 'minMe'
INFO:hyperopt.mongoexp:PROTOCOL mongo
INFO:hyperopt.mongoexp:USERNAME None
INFO:hyperopt.mongoexp:HOSTNAME localhost
INFO:hyperopt.mongoexp:PORT 1234
INFO:hyperopt.mongoexp:PATH /foo_db/jobs
INFO:hyperopt.mongoexp:DB foo_db
INFO:hyperopt.mongoexp:COLLECTION jobs
INFO:hyperopt.mongoexp:PASS None
INFO:hyperopt.mongoexp:no job found, sleeping for 0.7s
INFO:hyperopt.mongoexp:Error while unpickling. Try installing dill via "pip install dill" for enhanced pickling support.
INFO:hyperopt.mongoexp:job exception: 'module' object has no attribute 'minMe'
Traceback (most recent call last):
File "/Users/WernerChao/tensorflow/bin/hyperopt-mongo-worker", line 6, in <module>
sys.exit(hyperopt.mongoexp.main_worker())
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1302, in main_worker
return main_worker_helper(options, args)
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1249, in main_worker_helper
mworker.run_one(reserve_timeout=float(options.reserve_timeout))
File "/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1064, in run_one
domain = pickle.loads(blob)
AttributeError: 'module' object has no attribute 'minMe'
INFO:hyperopt.mongoexp:exiting with N=9223372036854775803 after 4 consecutive exceptions
6) Then Mongo workers would shut off.
Things I've tried:
install "dill" as the error suggested -> didn't work
Put global imports into the objective function so it can pickle -> didn't work
Put try/except with "dill" or "pickle" as the import -> didn't work
Does anyone have similar issues? I'm running out of ideas to try, and have been working on this for 2 days in vain. I think I am missing something really simple here, just can't seem to find it.
What am I missing?
Any suggestion is welcome, please!
I had the same problem in Python 3.5. Installing dill didn't help, nor did setting workdir in MongoTrials or via the hyperopt-mongo-worker CLI. hyperopt-mongo-worker doesn't seem to have access to __main__ where the function was defined:
AttributeError: Can't get attribute 'minMe' on <module '__main__' from ...hyperopt-mongo-worker
As @jaikumarm suggested, I circumvented the problem by writing a module file with all the required functions. However, instead of soft-linking it into the bin directory, I extended the PYTHONPATH before running hyperopt-mongo-worker:
export PYTHONPATH="${PYTHONPATH}:<dir_with_the_module.py>"
hyperopt-mongo-worker ...
That way, hyperopt-mongo-worker is able to import the module containing minMe.
I fought with this for several days before coming up with a workable solution. There are two problems:
1. The mongo worker spawns a separate process to run the optimizer, so any context from your original Python file is lost and unavailable to this new process.
2. The imports in this new process happen in the context of the hyperopt-mongo-worker script, which in your case will be /Users/WernerChao/tensorflow/bin/.
So my solution is to make this new optimizer function completely self-sufficient:
optimizer.py
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error

# Preprocessing data
train_xg = pd.read_csv('train.csv')
n_train = len(train_xg)
print "Whole data set size: ", n_train

# Creating columns for features, and categorical features
features_col = [x for x in train_xg.columns if x not in ['id', 'loss', 'log_loss']]
cat_features_col = [x for x in train_xg.select_dtypes(include=['object']).columns if x not in ['id', 'loss', 'log_loss']]
for c in range(len(cat_features_col)):
    train_xg[cat_features_col[c]] = train_xg[cat_features_col[c]].astype('category').cat.codes

# Use this to train random forest regressor
train_xg_x = np.array(train_xg[features_col])
train_xg_y = np.array(train_xg['loss'])

def minMe(params):
    # Hyperopt tuning for hyperparameters
    from sklearn.model_selection import cross_val_score
    from sklearn.ensemble import RandomForestRegressor
    from hyperopt import STATUS_OK
    try:
        import dill as pickle
        print('Went with dill')
    except ImportError:
        import pickle

    def hyperopt_rf(params):
        rf = RandomForestRegressor(**params)
        return cross_val_score(rf, train_xg_x, train_xg_y).mean()

    acc = hyperopt_rf(params)
    print 'new acc:', acc, 'params: ', params
    return {'loss': -acc, 'status': STATUS_OK}
wrapper.py
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
from hyperopt.mongoexp import MongoTrials
import optimizer

space_rf = { 'min_samples_leaf': hp.choice('min_samples_leaf', range(1,100)) }
trials = MongoTrials('mongo://localhost:1234/foo_db/jobs', exp_key='exp1')  # define trials before passing it to fmin
best = fmin(fn=optimizer.minMe, space=space_rf, trials=trials, algo=tpe.suggest, max_evals=100)
print "Best: ", best
Once you have this code, link optimizer.py into the bin folder:
ln -s /Users/WernerChao/Git/test/optimizer.py /Users/WernerChao/tensorflow/bin/
Now run wrapper.py and then the mongo worker; it should be able to import the optimizer from its local context and run the minMe function.
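Using the worker invocation from the question, that last step would look like:
hyperopt-mongo-worker --mongo=localhost:1234/foo_db --poll-interval=0.1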
Try to install dill in the Python environment of your tensorflow virtualenv (or possibly the worker's):
/Users/WernerChao/tensorflow/lib/python2.7/site-packages/hyperopt
Your aim is to get rid of the hyperopt error message:
hyperopt.mongoexp:Error while unpickling. Try installing dill via "pip install dill" for enhanced pickling support.
This is because Python by default cannot marshal a function. It requires the dill library to extend Python's pickling module for serialising/de-serialising Python objects. In your case, it failed to serialise your function minMe().
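To see why the workers fail even though your own script runs: pickle stores a plain function by reference (module name plus attribute name), not by value, so unpickling only works in a process that can resolve that reference. A minimal sketch of the failure mode (my illustration, not code from the question):
import pickle

def minMe(params):
    return params

blob = pickle.dumps(minMe)  # stores a reference like (__main__, 'minMe'), not the function body
# pickle.loads(blob) succeeds here, because __main__ defines minMe.
# In a worker process whose __main__ is the hyperopt-mongo-worker script,
# the same call raises: AttributeError: 'module' object has no attribute 'minMe'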
I made a separate file which calculates the loss, and copied it to /anaconda2/bin/ and /anaconda2/lib/python2.7/site-packages/hyperopt; it is working fine.
This was my Traceback
Traceback (most recent call last):
File "/home/greatskull/anaconda2/bin/hyperopt-mongo-worker", line 6, in <module>
sys.exit(hyperopt.mongoexp.main_worker())
File "/home/greatskull/anaconda2/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1302, in main_worker
return main_worker_helper(options, args)
File "/home/greatskull/anaconda2/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1249, in main_worker_helper
mworker.run_one(reserve_timeout=float(options.reserve_timeout))
File "/home/greatskull/anaconda2/lib/python2.7/site-packages/hyperopt/mongoexp.py", line 1073, in run_one
with temp_dir(workdir, erase_created_workdir), working_dir(workdir):
File "/home/greatskull/anaconda2/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/home/greatskull/anaconda2/lib/python2.7/site-packages/hyperopt/utils.py", line 229, in temp_dir
os.makedirs(dir)
File "/home/greatskull/anaconda2/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/home/greatskull/anaconda2/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)

Issue with multiline TokenInputTransformer in IPython extension

I am trying to use a TokenInputTransformer within an IPython extension module, but it seems that there is something wrong with the standard implementation of token transformers with multiline input. Consider the following minimal extension:
from IPython.core.inputtransformer import TokenInputTransformer

@TokenInputTransformer.wrap
def test_transformer(tokens):
    return tokens

def load_ipython_extension(ip):
    for s in (ip.input_splitter, ip.input_transformer_manager):
        s.python_line_transforms.extend([test_transformer()])
    print "Test activated"
When I load the extension in IPython 1.1.0 I get a non-handled exception with multiline input:
In [1]: %load_ext test
Test activated
In [2]: abs(
...: 2
...: )
Traceback (most recent call last):
File "/Applications/anaconda/bin/ipython", line 6, in <module>
sys.exit(start_ipython())
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/__init__.py", line 118, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 545, in launch_instance
app.start()
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/terminal/ipapp.py", line 362, in start
self.shell.mainloop()
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/terminal/interactiveshell.py", line 436, in mainloop
self.interact(display_banner=display_banner)
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/terminal/interactiveshell.py", line 548, in interact
self.input_splitter.push(line)
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/core/inputsplitter.py", line 620, in push
out = self.push_line(line)
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/core/inputsplitter.py", line 655, in push_line
line = transformer.push(line)
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/core/inputtransformer.py", line 152, in push
return self.output(tokens)
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/core/inputtransformer.py", line 157, in output
return untokenize(self.func(tokens)).rstrip('\n')
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/utils/_tokenize_py2.py", line 276, in untokenize
return ut.untokenize(iterable)
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/utils/_tokenize_py2.py", line 214, in untokenize
self.add_whitespace(start)
File "/Applications/anaconda/lib/python2.7/site-packages/IPython/utils/_tokenize_py2.py", line 199, in add_whitespace
assert row >= self.prev_row
AssertionError
If you suspect this is an IPython bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@scipy.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
%config Application.verbose_crash=True
Am I doing something wrong or is it really an IPython bug?
It is really an IPython bug, I think. Specifically, the way we handle tokenize fails when an expression involving brackets (()[]{}) is spread over more than one line. I'm trying to work out what we can do about this.
Kinda late answer, but I was trying to use it in my own extension and just had the same problem. I solved it by simply removing NL tokens from the list (NL is not the same as the NEWLINE token, which ends a statement); NL tokens only appear inside [], (), {}, so they should be safely removable.
from tokenize import NL
from IPython.core.inputtransformer import TokenInputTransformer  # import needed for the decorator

@TokenInputTransformer.wrap
def mat_transformer(tokens):
    tokens = list(filter(lambda t: t.type != NL, tokens))
    return tokens
If you are looking for full example, I've posted my goofy code there: https://github.com/Quinzel/pymat