Convert MongoDB query (unicode) to JSON using json_util - MongoDB

from bson.json_util import dumps

def json_response(response):
    return {"response": dumps(response, ensure_ascii=False).encode("utf8"),
            "headers": {"Content-type": "text/json"}}
This problem is driving me crazy. The error appears randomly, and I can't find the solution.
/core/handlers/wsgi.py", line 38, in __call__
  output = lookup_view(req)
File "auth/decorator.py", line 8, in wrap
  return fn(req, *args, **kwargs)
File "auth/decorator.py", line 21, in wrap
  return fn(req, *args, **kwargs)
File "contrib/admin/views.py", line 67, in submit_base_premission
  return json_response({"baseperm": baseperm, "Meta": {"gmsg": u"...", "type": 201}})
File "render/render_response.py", line 85, in json_response
  return {"response": dumps(response, ensure_ascii=False).encode("utf8"),
File "/usr/local/lib/python2.7/dist-packages/bson/json_util.py", line 116, in dumps
  return json.dumps(_json_convert(obj), *args, **kwargs)
File "/usr/lib/python2.7/json/__init__.py", line 238, in dumps
  **kw).encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 201, in encode
  chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 264, in iterencode
  return _iterencode(o, 0)
File "/usr/lib/python2.7/json/encoder.py", line 178, in default
  raise TypeError(repr(o) + " is not JSON serializable")
TypeError: ObjectId('51f7dcee95113b7a48e974fe') is not JSON serializable
baseperm is a pymongo Cursor. The error occurs randomly, and that is where the problem lies.
It seems that json_util sometimes fails to detect the ObjectId and doesn't convert it to a str, so json raises an error on dumps.

Check the version of the PyMongo driver: if it is older than 2.4.2, you may need to update it. Before that version, the __str__ method of ObjectId was handled incorrectly on Python 2.x; see the GitHub issue "ObjectId.__str__ should return str in 2.x".
To check the pymongo driver version, type in the python shell:
import pymongo
print(pymongo.version)
UPDATE
I assume you have tested both environments with the same dataset, so try upgrading Python 2.7.3 to 2.7.5.
Otherwise, try iterating through the cursor and building the list before passing it to json_response(), i.e.:
baseperm = list(baseperm)  # now baseperm is a list of the documents
...
my_response['baseperm'] = baseperm
my_response['Meta'] = ...
...
return json_response(my_response)
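If upgrading is not an option, another common workaround (a sketch, not from the original answer) is to give json.dumps a fallback encoder via its default parameter, so any value it cannot serialize, such as an ObjectId, is converted to its string form. A small stand-in class is used here so the sketch runs without pymongo installed:

```python
import json

class FakeObjectId:
    """Stand-in for bson.ObjectId, so this sketch runs without pymongo."""
    def __init__(self, oid):
        self.oid = oid
    def __str__(self):
        return self.oid

def to_json(doc):
    # default=str is called for any object json cannot serialize on its own,
    # so ObjectId (and e.g. datetime) values become their string form.
    return json.dumps(doc, default=str, ensure_ascii=False)

doc = {"_id": FakeObjectId("51f7dcee95113b7a48e974fe"), "name": "perm"}
print(to_json(doc))  # prints {"_id": "51f7dcee95113b7a48e974fe", "name": "perm"}
```

With the real bson.ObjectId the same default=str trick applies unchanged.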

I reported this problem on the MongoDB issue tracker:
https://jira.mongodb.org/browse/PYTHON-548
answer:
You said this only happens occasionally? The only thing I can think of that might be related is mod_wsgi spawning sub-interpreters. In PyMongo that tends to cause problems with the C extensions when encoding Python dicts to BSON. In your case this seems to be happening after the BSON documents are decoded to Python dicts. It looks like isinstance is failing to match ObjectId in json_util.default(). PYTHON-539 seemed to be a similar problem, related to a package misconfiguration in the user's environment.
There could be a fairly large performance hit, but could you try running PyMongo without C extensions to see if that solves the problem?
You can read about the mod_wsgi issue here:
http://api.mongodb.org/python/current/faq.html#does-pymongo-work-with-mod-wsgi
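To see whether the C extensions are actually in use, PyMongo and its bson package each expose a has_c() function. A small helper (an illustrative sketch; it returns None rather than failing when pymongo is not installed):

```python
def c_extensions_status():
    """Report whether PyMongo's C extensions are compiled in.

    Returns None when pymongo is not installed, so the check is safe
    to run in any environment.
    """
    try:
        import bson
        import pymongo
    except ImportError:
        return None
    return {"pymongo": pymongo.has_c(), "bson": bson.has_c()}

print(c_extensions_status())
```

If both values are False, PyMongo is already running in pure-Python mode and the C-extension theory can be ruled out.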

Related

How to fix data length error in PostgreSQL and Django

The project is in Django and works well with SQLite, but after migrating to PostgreSQL, trying to register a user shows this error:
File "C:\Users\liz\developer\env\lib\site-packages\django\db\backends\utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
django.db.utils.DataError: the value is too long for varchar type varying(30)
I already changed the slug fields to 255 but the error is still there.
Most of the answers say that you must change the slug fields to 255. I searched in VS Code for max_length=30 and found a couple of fields with that length; I changed them all to 255 and it worked. If you ever face this, change all such fields in all your models to 255 to make it work, then adjust the lengths as you need.
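The reason the problem only surfaced after the migration can be shown with the stdlib sqlite3 module: SQLite ignores the declared VARCHAR length, while PostgreSQL enforces it with a DataError. A minimal demonstration (table and column names are illustrative):

```python
import sqlite3

# SQLite silently accepts values longer than the declared VARCHAR limit,
# which is why the Django project "worked" until it moved to PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (slug VARCHAR(30))")

long_slug = "x" * 100  # 100 chars, well past the declared 30-char limit
conn.execute("INSERT INTO users (slug) VALUES (?)", (long_slug,))  # no error

stored, = conn.execute("SELECT slug FROM users").fetchone()
print(len(stored))  # prints 100 - SQLite kept every character
```

PostgreSQL would reject the same INSERT, which is why auditing every max_length (not just the one field named in the error) is the reliable fix.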

How to convert Blender blend (or obj) file to .h file?

I would like to convert a 3D model (a .obj exported from Blender) to a .h file. There is a tool on GitHub, but when I run it I get an error message:
tool:
https://github.com/HBehrens/obj2opengl/
my command line:
C:\Users\***>perl C:\Users\***\Desktop\vuforia\obj2opengl.pl C:\Users\***\Desktop\vuforia\cc.obj
cc.obj is an export from Blender.
error:
Can't use 'defined(@array)' (Maybe you should just omit the defined()?) at C:\Users\***\Desktop\vuforia\obj2opengl.pl line 118.
line 118 :
if(defined(@center)) {  # line 118
    $xcen = $center[0];
    $ycen = $center[1];
    $zcen = $center[2];
}
I don't know where the problem is. My OS is Windows 64-bit, and I have Perl installed.
Can't use 'defined(@array)' (Maybe you should just omit the defined()?)
This tells you that the syntax defined(@array) is not valid, and even gives a hint. All you need to do is remove the defined(). Your code would then read
if(@center) {
    $xcen = $center[0];
    $ycen = $center[1];
    $zcen = $center[2];
}
The if evaluation forces the array into scalar context, which makes it return its number of elements. That's probably 3, or 0. If it's 0 then it's a false value and the block is skipped. 3 on the other hand is a true value and the block will be executed.
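The same idea, expressed in Python for comparison (an analogy only, since the script itself is Perl): a sequence is truthy exactly when it is non-empty, so the guarded block runs only when data is present.

```python
# Like Perl's `if (@center)`: an empty sequence is false, a non-empty
# one is true, so the unpacking only happens when the data exists.
center = [1.0, 2.0, 3.0]

if center:
    xcen, ycen, zcen = center

print(xcen, ycen, zcen)  # prints 1.0 2.0 3.0

assert not []     # empty list is falsy, mirroring a 0-element @array
assert [0.0]      # a non-empty list is truthy even if it holds a false value
```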
The defined(@array) syntax was deprecated in Perl:
Use of defined on aggregates (hashes and arrays) is deprecated. It used to report whether memory for that aggregate had ever been allocated. This behavior may disappear in future versions of Perl.
The version of Perl you installed is newer than the version the author of your script used, and this is a rare occasion of an incompatible change.

TypeError in subprocess command for youtube-dl

I'm trying to write a very simple script which passes in a .csv file and runs youtube-dl (and specified args) for each link in the file- saving the files to a certain directory.
The format of the csv is Artist;Title;Link. And the script:
import pandas as pd
import subprocess

def get_music(csv):
    df = pd.read_csv(csv, sep=";", skipinitialspace=True)
    for _, row in df.iterrows():
        subprocess.call(['youtube-dl', "x",
                         "--output ~/mydir/%(title)s.%(ext)s",
                         "--extract-audio", "--youtube-skip-dash-manifest",
                         "--prefer-ffmpeg", "--audio-format", "mp3"], row.Link)

get_music("CSV.csv")
When I run this however, I get the following error:
raise TypeError("bufsize must be an integer")
TypeError: bufsize must be an integer
I'm afraid I don't understand how the bufsize is getting passed something other than an integer. Simply put, what am I doing wrong, and how should I fix it?
Currently, your second argument to subprocess.call, which specifies the bufsize, is row.Link, which seems to be the URL you want to download. Instead of "x", pass in the actual link. Also, there is no option "--output ~/mydir/%(title)s.%(ext)s", as option names do not contain spaces. Most likely, you want
subprocess.call(['youtube-dl', row.Link,
                 "--output", "~/mydir/%(title)s.%(ext)s",
                 "--extract-audio", "--youtube-skip-dash-manifest",
                 "--prefer-ffmpeg", "--audio-format", "mp3"])
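Putting it together, a self-contained variant of the whole script might look like the sketch below. It swaps pandas for the stdlib csv module (pandas is not needed for a three-column file) and factors the command building into its own function so it can be checked without actually invoking youtube-dl:

```python
import csv
import subprocess

def build_command(link):
    # Option names and their values must be separate list elements;
    # "--output ~/mydir/..." as a single string is not a valid option name.
    return ["youtube-dl", link,
            "--output", "~/mydir/%(title)s.%(ext)s",
            "--extract-audio", "--youtube-skip-dash-manifest",
            "--prefer-ffmpeg", "--audio-format", "mp3"]

def get_music(csv_path):
    # The file format is Artist;Title;Link, so csv.DictReader is enough here.
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f, delimiter=";", skipinitialspace=True):
            subprocess.call(build_command(row["Link"]))
```

Note that subprocess.call is given exactly one positional argument (the list), so nothing stray can land in the bufsize slot.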

Julia MongoDB save raw variable

I would like to save some variables in Julia to a database using MongoDB. I ran into a problem when using the following function:
insert(client, "myDB.rawInfo", { "raw" => status})
This works when saving simple information, like strings or ints. However, status is of type Array{Any,1}. When saving, I get the following error message:
`build` has no method matching build(::Ptr{None}, ::Dict{String,Any})
while loading In[256], in expression starting on line 18
in append at C:\Users\Guido\.julia\v0.3\Mongo\src\BSON.jl:225
in append at C:\Users\Guido\.julia\v0.3\Mongo\src\BSON.jl:231 (repeats 2 times)
in build at C:\Users\Guido\.julia\v0.3\Mongo\src\BSON.jl:207
in BSONObject at C:\Users\Guido\.julia\v0.3\Mongo\src\BSON.jl:82
in find_one at C:\Users\Guido\.julia\v0.3\Mongo\src\Mongo.jl:30
in find_one at C:\Users\Guido\.julia\v0.3\Mongo\src\Mongo.jl:34
Can anyone help me?
There seems to be something wrong with the MongoDB package for Julia which causes this failure with nested variables. A fix was made a while ago and can be applied by manually reinstalling this version of Mongo.jl (first uninstall, then manually reinstall): https://github.com/rened/Mongo.jl

Can't insert into MongoDB due to AutoReconnect

Background:
I've got a python script using pymongo that pulls some XML data, parses it into an array of dictionaries called 'all_orders'. I then try to insert it into the collection "orders" and I invariably get this exception. I am reasonably certain that my array of dictionaries is correct because when the list is small it tends to work (I think). I've also found that 8 out of the ~1300 documents I tried to insert into the collection worked.
Question:
Do you know what causes this AutoReconnect(str(e)) exception? Do you know how to work around or avoid this issue?
Error Trace:
File "mongovol.py", line 152, in get_reports
orders.insert(all_orders)
File "/Users/ashutosh/hrksandbox/lumoback-garden2/venv/lib/python2.7/site-packages/pymongo/collection.py", line 359, in insert
continue_on_error, self.__uuid_subtype), safe)
File "/Users/ashutosh/hrksandbox/lumoback-garden2/venv/lib/python2.7/site-packages/pymongo/mongo_client.py", line 853, in _send_message
raise AutoReconnect(str(e))