I have a MongoDB collection with legacy UUID _id fields.
I can retrieve individual objects with pymongo by passing UUID objects:
from uuid import UUID
from pymongo import MongoClient
client = MongoClient('localhost', 27017)
db = client[DB_NAME]
collection = db[COLLECTION_NAME]
one_ok = collection.find_one(
    filter={'_id': UUID('eedacde6-e241-4f37-a57d-c4e8eb6601a7')})
one_not_ok = collection.find_one(
    filter={'_id': 'eedacde6-e241-4f37-a57d-c4e8eb6601a7'})
print(one_ok is None)      # prints False
print(one_not_ok is None)  # prints True
However, I can't seem to get individual resources through Eve. I get a 404 for:
http://localhost:5000/collection1/eedacde6-e241-4f37-a57d-c4e8eb6601b7
The list returned by http://localhost:5000/collection1 mentions a link collection1/eedacde6-e241-4f37-a57d-c4e8eb6601b7 that seems to match the URL that returns a 404:
<resource href="collection1" title="collection1">
    <link rel="parent" href="/" title="home"/>
    <_meta>
        <max_results>25</max_results>
        <page>1</page>
        <total>2</total>
    </_meta>
    <resource href="collection1/eedacde6-e241-4f37-a57d-c4e8eb6601b7" title="Collection1">
        <_created>Thu, 01 Jan 1970 00:00:00 GMT</_created>
        <_etag>9718572fe2d17b12358691602f6bd5f1ee345b06</_etag>
        <_id>eedacde6-e241-4f37-a57d-c4e8eb6601b7</_id>
        <_updated>Thu, 01 Jan 1970 00:00:00 GMT</_updated>
    </resource>
    <resource href="collection1/98c67ae6-ee7b-4bd3-9ec1-56b70fa2a9c7" title="Collection1">
        <_created>Thu, 01 Jan 1970 00:00:00 GMT</_created>
        <_etag>75c2451d19750e1007274315672bf893f9654273</_etag>
        <_id>98c67ae6-ee7b-4bd3-9ec1-56b70fa2a9c7</_id>
        <_updated>Thu, 01 Jan 1970 00:00:00 GMT</_updated>
    </resource>
</resource>
This is my DOMAIN variable in settings.py. I set it up that way after following http://python-eve.org/tutorials/custom_idfields.html .
DOMAIN = {
    'collection1': {
        'item_url': 'regex("[a-f0-9]{8}-?[a-f0-9]{4}-?4[a-f0-9]{3}-?[89ab][a-f0-9]{3}-?[a-f0-9]{12}")',
        'schema': {
            '_id': {'type': 'uuid'},
        },
    }
}
I suspect this doesn't work because the UUIDs are 'Legacy UUIDs', but I might be wrong.
Any ideas on how I can make URLs such as http://localhost:5000/collection1/eedacde6-e241-4f37-a57d-c4e8eb6601b7 return individual resources?
Workaround: hack Eve
I am able to make it work by hacking Eve. The code below must be added for each HTTP verb (GET, POST, etc.).
In eve/methods/get.py
def getitem_internal(resource, **lookup):
    """
    <...>
    """
    req = parse_request(resource)
    resource_def = config.DOMAIN[resource]
    embedded_fields = resolve_embedded_fields(resource, req)

    #### hack: convert the _id string from the URL into a UUID object ###
    #### (requires `from uuid import UUID` at the top of the module) ####
    if '_id' in lookup:
        if 'schema' in resource_def and '_id' in resource_def['schema']:
            if resource_def['schema']['_id'].get('type') == 'uuid':
                lookup['_id'] = UUID(lookup['_id'])
    #### end of hack ###

    soft_delete_enabled = config.DOMAIN[resource]['soft_delete']
    if soft_delete_enabled:
        # GET requests should always fetch soft deleted documents from the db.
        # They are handled and included in 404 responses below.
        req.show_deleted = True

    document = app.data.find_one(resource, req, **lookup)
    <...>
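A less invasive alternative may be to do the same conversion in an event hook instead of patching Eve's source. This is an untested sketch: it relies on Eve's pre-request hooks, which receive the lookup dict before the database query runs, and note that legacy (subtype-3) UUIDs may additionally require the matching uuid_representation on the pymongo connection:

from uuid import UUID

from eve import Eve

app = Eve()

def uuid_lookup(resource, request, lookup):
    # Turn the _id string from the URL into a UUID object so the Mongo
    # query matches documents whose _id is stored as a UUID.
    schema = app.config['DOMAIN'][resource]['schema']
    if '_id' in lookup and schema.get('_id', {}).get('type') == 'uuid':
        lookup['_id'] = UUID(lookup['_id'])

app.on_pre_GET += uuid_lookup

if __name__ == '__main__':
    app.run()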
Related
I'm a newbie using MongoDB and Eve, and I have a problem setting up dynamic lookup filters.
My use case is to restrict a pre_GET to only the documents whose _id is included in a list (array) stored in the profile of the (authenticated) user.
Now, when this list is static, it's working fine like this:
class BCryptAuth(BasicAuth):
    def check_auth(self, username, password, allowed_roles, resource, method):
        # use Eve's own db driver; no additional connections/resources are used
        accounts = app.data.driver.db['people']
        account = accounts.find_one({'lastname': username})
        return account and \
            bcrypt.hashpw(password, account['password']) == account['password']

# Hook Test
def pre_GET(resource, request, lookup):
    lookup["_id"] = {'$in': ['5a19808f65a98412dba4b683', '5a1b06d365a98412a4445fa0']}

if __name__ == '__main__':
    app = Eve(auth=BCryptAuth)
    # Hook Test
    app.on_pre_GET += pre_GET
    # End Hook Test
    app.run()
My need is to substitute the line
lookup["_id"] = {'$in': ['5a19808f65a98412dba4b683', '5a1b06d365a98412a4445fa0'] }
with the content of the array "canAccess" present in the authenticated user's profile (a document in the people collection) - something like this pseudocode:
SELECT the content of array canAccess WHERE lastname = authenticated_user()
This is the document representing the user:
{
    "_updated": "Sat, 25 Nov 2017 14:39:11 GMT",
    "firstname": "barack",
    "lastname": "obama",
    "role": [
        "copy",
        "author"
    ],
    "canAccess": [
        "5a1b06d365a98412a4445fa0",
        "5a1c5c9265a984120caf7e0b"
    ],
    "_created": "Sat, 25 Nov 2017 14:39:11 GMT",
    "_id": "5a19808f65a98412dba4b683",
    "_etag": "758056ac49d156526858bd3a8b4922d65231942f"
}
Any help would be greatly appreciated.
Thanks
Giulio
You could use Flask's g object to store the user_id in the per-request application context, so you can retrieve it inside the hook.
In check_auth:
from flask import g

def check_auth(self, username, password, allowed_roles, resource, method):
    people = app.data.driver.db['people']
    user = people.find_one({'lastname': username})
    if user:
        g.user_id = user['_id']
    return user and \
        bcrypt.hashpw(password, user['password']) == user['password']
Then, inside the hook, you can access the data in the array by doing the same as in check_auth to retrieve the account, roughly like this:
from flask import g

def pre_GET(resource, request, lookup):
    user_id = getattr(g, 'user_id', None)
    people = app.data.driver.db['people']
    user = people.find_one({'_id': user_id})
    if user:
        lookup["_id"] = {'$in': user['canAccess']}
I am planning to use S3 cloud storage on IBM Bluemix, but one strange thing I found is that there seems to be no way to add custom metadata to the objects stored in an S3 bucket.
Is there a way to add custom metadata to the objects, and if yes, can you please advise how to add and access it?
Thanks for pointing out a hole in the documentation!
Custom metadata is defined by passing an x-amz-meta-{key} header with a {value} value. An example request:
PUT /{bucket-name}/{object-name} HTTP/1.1
Authorization: {authorization-string}
x-amz-meta-foo: bar
x-amz-date: 20160825T183001Z
x-amz-content-sha256: {hashed-body}
Content-Type: text/plain; charset=utf-8
Host: s3-api.us-geo.objectstorage.softlayer.net
Content-Length: 18

{
    "foo": "bar"
}
A HEAD request to check the metadata would look like:
HEAD /{bucket-name}/{object-name} HTTP/1.1
Authorization: {authorization-string}
x-amz-date: 20160825T183244Z
Host: s3-api.us-geo.objectstorage.softlayer.net
And respond with:
HTTP/1.1 200 OK
Date: Thu, 25 Aug 2016 18:32:44 GMT
X-Clv-Request-Id: da214d69-1999-4461-a130-81ba33c484a6
Accept-Ranges: bytes
Server: Cleversafe/3.9.1.102
X-Clv-S3-Version: 2.5
ETag: {MD5-hash}
Content-Type: text/plain; charset=UTF-8
x-amz-meta-foo: bar
Last-Modified: Thu, 25 Aug 2016 17:49:06 GMT
Content-Length: 18
Using the CLI, the syntax would be:
$ aws --endpoint-url=https://{endpoint} s3 cp ~/new-file s3://bucket-1/ --metadata foo=bar
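To read the metadata back with the CLI, a head-object call (assuming the file above landed at s3://bucket-1/new-file) should include the foo metadata in its output:
$ aws --endpoint-url=https://{endpoint} s3api head-object --bucket bucket-1 --key new-file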
Hope that helps!
This is possible; I use it every day, adding metadata and then sending the metadata to a database via cron calls.
Here is a small example Python script to create/add/change metadata for a list of objects:
from boto3 import client

param_1 = YOUR_ACCESS_KEY
param_2 = YOUR_SECRET_KEY
param_3 = YOUR_END_POINT
param_4 = YOUR_BUCKET

# Create the S3 client
s3ressource = client(
    service_name='s3',
    endpoint_url=param_3,
    aws_access_key_id=param_1,
    aws_secret_access_key=param_2,
    use_ssl=True,
)
# Build a list of objects in the bucket, keeping only the image files.
# WARNING: any object whose key does not match the extensions is deleted!
def BuildObjectListPerBucket(variablebucket):
    global listofObjectstobeanalyzed
    listofObjectstobeanalyzed = []
    extensions = ['.jpg', '.png']
    for key in s3ressource.list_objects(Bucket=variablebucket)["Contents"]:
        onemoreObject = key['Key']
        if onemoreObject.endswith(tuple(extensions)):
            listofObjectstobeanalyzed.append(onemoreObject)
        else:
            s3ressource.delete_object(Bucket=variablebucket, Key=onemoreObject)
    return listofObjectstobeanalyzed
# for a given existing object, create metadata (re-uploads the file with ExtraArgs)
def createmetadata(bucketname, objectname):
    s3ressource.upload_file(objectname, bucketname, objectname,
                            ExtraArgs={"Metadata": {"metadata1": "ImageName",
                                                    "metadata2": "ImagePROPERTIES",
                                                    "metadata3": "ImageCREATIONDATE"}})
# for a given existing object, add new metadata
def ADDmetadata(bucketname, objectname):
    k = s3ressource.head_object(Bucket=bucketname, Key=objectname)
    m = k["Metadata"]
    m["new_metadata"] = "ImageNEWMETADATA"
    # copying the object onto itself with MetadataDirective='REPLACE'
    # rewrites its metadata in place
    s3ressource.copy_object(Bucket=bucketname, Key=objectname,
                            CopySource=bucketname + '/' + objectname,
                            Metadata=m, MetadataDirective='REPLACE')
# for a given existing object, update a metadata key with a new value
def CHANGEmetadata(bucketname, objectname):
    k = s3ressource.head_object(Bucket=bucketname, Key=objectname)
    m = k["Metadata"]
    m.update({'watson_visual_rec_dic': 'ImageCREATIONDATEEEEEEEEEEEEEEEEEEEEEEEEEE'})
    s3ressource.copy_object(Bucket=bucketname, Key=objectname,
                            CopySource=bucketname + '/' + objectname,
                            Metadata=m, MetadataDirective='REPLACE')
# for a given existing object, print its metadata
def readmetadata(bucketname, objectname):
    ALLDATAOFOBJECT = s3ressource.get_object(Bucket=bucketname, Key=objectname)
    ALLDATAOFOBJECTMETADATA = ALLDATAOFOBJECT['Metadata']
    print(ALLDATAOFOBJECTMETADATA)
# create the list of objects on a per-bucket basis
BuildObjectListPerBucket(param_4)

# Call the functions to see the results
for objectitem in listofObjectstobeanalyzed:
    readmetadata(param_4, objectitem)
    createmetadata(param_4, objectitem)
    readmetadata(param_4, objectitem)
    ADDmetadata(param_4, objectitem)
    readmetadata(param_4, objectitem)
    CHANGEmetadata(param_4, objectitem)
    readmetadata(param_4, objectitem)
I have a few REST endpoints and a few [asmx/svc] endpoints.
Some of them are GET and the others are POST operations.
I am trying to put together a quick and dirty, repeatable healthcheck sequence to find out whether all the endpoints are responsive or any are down.
Essentially: get a 200 or 201, and report an error otherwise.
What is the easiest way to do this?
SoapUI internally uses Apache HttpClient 4.1.1, which you can use inside a Groovy script test step to perform your checks.
Add a Groovy script test step to your testCase and use the code below inside it. It basically tries a GET against a list of URLs; if a URL returns HTTP status 200 or 201 it is considered working, if it returns 405 (Method Not Allowed) a POST is tried with the same status-code check, and otherwise the endpoint is considered down.
Note that some services can be up and running yet return, for example, 400 (Bad Request) if the request is not correct, so consider whether you need to rethink the way you perform the check or add other status codes that count as the server working correctly.
import org.apache.http.HttpEntity
import org.apache.http.HttpResponse
import org.apache.http.client.methods.HttpGet
import org.apache.http.client.methods.HttpPost
import org.apache.http.impl.client.DefaultHttpClient

// urls to check
def urls = ['http://google.es','http://stackoverflow.com']

// apache http-client to use in closure
DefaultHttpClient httpclient = new DefaultHttpClient()

// util function to get the response status
def getStatus = { httpMethod ->
    HttpResponse response = httpclient.execute(httpMethod)
    // consume the entity to avoid errors with http-client
    if(response.getEntity() != null) {
        response.getEntity().consumeContent()
    }
    return response.getStatusLine().getStatusCode()
}
HttpGet httpget
HttpPost httppost

// findAll urls that are working
def urlsWorking = urls.findAll { url ->
    log.info "try GET for $url"
    httpget = new HttpGet(url)
    def status = getStatus(httpget)
    // if status is 200 or 201 it's correct
    if(status in [200,201]){
        log.info "$url is OK"
        return true
    // if GET is not allowed try with POST
    }else if(status == 405){
        log.info "try POST for $url"
        httppost = new HttpPost(url)
        status = getStatus(httppost) // check the POST request, not the earlier GET
        // if status is 200 or 201 it's correct
        if(status in [200,201]){
            log.info "$url is OK"
            return true
        }
        log.info "$url is NOT working, status code: $status"
        return false
    }else{
        log.info "$url is NOT working, status code: $status"
        return false
    }
}
// close connection to release resources
httpclient.getConnectionManager().shutdown()
log.info "URLS WORKING:" + urlsWorking
This script logs:
Tue Nov 03 22:37:59 CET 2015:INFO:try GET for http://google.es
Tue Nov 03 22:38:00 CET 2015:INFO:http://google.es is OK
Tue Nov 03 22:38:00 CET 2015:INFO:try GET for http://stackoverflow.com
Tue Nov 03 22:38:03 CET 2015:INFO:http://stackoverflow.com is OK
Tue Nov 03 22:38:03 CET 2015:INFO:URLS WORKING:[http://google.es, http://stackoverflow.com]
Hope it helps,
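If the check doesn't have to live inside SoapUI, here is a minimal standalone sketch of the same logic, assuming Python with the requests package installed (the URLs are the same placeholders as above):

import requests

# endpoints to check (placeholders)
urls = ['http://google.es', 'http://stackoverflow.com']

def is_up(url):
    """True if the endpoint answers 200/201 to GET, or to POST when
    GET is rejected with 405 (Method Not Allowed)."""
    try:
        status = requests.get(url, timeout=10).status_code
        if status == 405:  # GET not allowed, retry with POST
            status = requests.post(url, timeout=10).status_code
        return status in (200, 201)
    except requests.RequestException:
        return False

for url in urls:
    print(url, 'is OK' if is_up(url) else 'is NOT working')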
Is there a way to retrieve data stored as page metadata (Page Properties) in a separate JCR node via the QueryBuilder API?
Example:
The search results should include the data under ~/article-1/jcr:content/thumbnail. However, the only results I am getting are data under ~/article-1/jcr:content/content (a parsys included on the template).
An example query:
http://localhost:4502/bin/querybuilder.json?p.hits=full&path=/content/path/to/articles
Results in (snippet):
{
    "jcr:path":"/content/path/to/articles/article-1",
    "jcr:createdBy":"admin",
    "jcr:created":"Tue Dec 03 2013 16:26:51 GMT-0500",
    "jcr:primaryType":"cq:Page"
},
{
    "jcr:path":"/content/path/to/articles/article-1/jcr:content",
    "sling:resourceType":"myapp/components/global/page/productdetail",
    "jcr:lockIsDeep":true,
    "jcr:uuid":"4ddebe08-82e1-44e9-9197-4241dca65bdf",
    "jcr:title":"Article 1",
    "jcr:mixinTypes":[
        "mix:lockable",
        "mix:versionable"
    ],
    "jcr:created":"Tue Dec 03 2013 16:26:51 GMT-0500",
    "jcr:baseVersion":"24cabbda-1e56-4d37-bfba-d0d52aba1c00",
    "cq:lastReplicationAction":"Activate",
    "jcr:isCheckedOut":true,
    "cq:template":"/apps/myapp/templates/global/productdetail",
    "cq:lastModifiedBy":"admin",
    "jcr:primaryType":"cq:PageContent",
    "jcr:predecessors":[
        "24cabbda-1e56-4d37-bfba-d0d52aba1c00"
    ],
    "cq:tags":[
        "mysite:mytag"
    ],
    "jcr:createdBy":"admin",
    "jcr:versionHistory":"9dcd41d4-2e10-4d52-b0c0-1ea20e102e68",
    "cq:lastReplicatedBy":"admin",
    "cq:lastModified":"Mon Dec 09 2013 17:57:59 GMT-0500",
    "cq:lastReplicated":"Mon Dec 16 2013 11:42:54 GMT-0500",
    "jcr:lockOwner":"admin"
}
Search configuration is the out-of-the-box default.
EDIT: The data is returned in the JSON; however, it is not accessible in the API:
Result:
{
    "success":true,
    "results":2,
    "total":2,
    "offset":0,
    "hits":[
        {
            "jcr:path":"/content/path/to/articles/article-a",
            "jcr:createdBy":"admin",
            "jcr:created":"Tue Dec 03 2013 16:27:01 GMT-0500",
            "jcr:primaryType":"cq:Page",
            "jcr:content":{
                "sling:resourceType":"path/to/components/global/page/productdetail",
                "_comment":"// ***SNIP*** //",
                "thumbnail":{
                    "jcr:lastModifiedBy":"admin",
                    "imageRotate":"0",
                    "jcr:lastModified":"Wed Dec 04 2013 12:10:47 GMT-0500",
                    "jcr:primaryType":"nt:unstructured"
                }
            }
        },
        {
            "jcr:path":"/content/path/to/articles/article-1",
            "jcr:createdBy":"admin",
            "jcr:created":"Tue Dec 03 2013 16:26:51 GMT-0500",
            "jcr:primaryType":"cq:Page",
            "jcr:content":{
                "sling:resourceType":"path/to/components/global/page/productdetail",
                "_comment":"// ***SNIP*** //",
                "thumbnail":{
                    "jcr:lastModifiedBy":"admin",
                    "imageRotate":"0",
                    "fileReference":"/content/dam/path/to/IBMDemo/apparel/women/wsh005_shoes/WSH005_0533_is_main.jpg",
                    "jcr:lastModified":"Mon Dec 09 2013 17:57:58 GMT-0500",
                    "jcr:primaryType":"nt:unstructured"
                }
            }
        }
    ]
}
Implementation code:
searchCriteria.put("path", path);
searchCriteria.put("type", "cq:Page");
searchCriteria.put("p.offset", offset.toString());
searchCriteria.put("p.limit", limit.toString());
searchCriteria.put("p.hits", "full");
searchCriteria.put("p.properties", "thumbnail");
searchCriteria.put("p.nodedepth", "2");
PredicateGroup predicateGroup = PredicateGroup.create(searchCriteria);
Query query = queryBuilder.createQuery(predicateGroup, session);
SearchResult result = query.getResult();
for (Hit hit : result.getHits()) {
try {
ValueMap properties = hit.getProperties();
VFSearchResult res = new VFSearchResult();
res.setUrl(hit.getPath());
res.setImageUrl((String)properties.get("thumbnail"));
res.setTags((String[])properties.get("cq:tags"));
res.setTeaserText((String)properties.get("teaserText"));
res.setTitle((String)properties.get("jcr:title"));
searchResults.add(res);
} catch (RepositoryException rex) {
logger.debug(String.format("could not retrieve node properties: %1s", rex));
}
}
After setting the path in the query, set one or more property filters, such as in this example:
type=cq:Page
path=/content/path/to/articles
property=jcr:content/thumbnail
property.operation=exists
p.hits=selective
p.properties=jcr:content/thumbnail someSpecificThumbnailPropertyToRetrieve
p.limit=100000
You can set those on /libs/cq/search/content/querydebug.html and then also use that to get the JSON URL for the same query.
Check this out for some other examples: http://dev.day.com/docs/en/cq/5-5/dam/customizing_and_extendingcq5dam/query_builder.html
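For reference, the same predicates passed directly to the querybuilder.json servlet (the URL form used earlier; p.properties here lists only jcr:content/thumbnail for brevity) would look roughly like:
http://localhost:4502/bin/querybuilder.json?type=cq:Page&path=/content/path/to/articles&property=jcr:content/thumbnail&property.operation=exists&p.hits=selective&p.properties=jcr:content/thumbnail&p.limit=100000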
You could use cURL to retrieve node properties in HTML/JSON/XML format. All you have to do is download and install cURL and run your curl commands from a terminal in the directory containing the curl executable.
HTML example:
C:\Users\****\Desktop>curl -u username:password http://localhost:4502/content/geometrixx-media/en/gadgets/leaps-in-ai.html
JSON example:
C:\Users\****\Desktop>curl -u username:password http://localhost:4502/content/geometrixx-media/en/gadgets/leaps-in-ai.html.tidy.infinity.json
Note: infinity in the above query ensures you get every property of every node under your specified path, recursively.
I wrote some code like this to get cell values in a Google Spreadsheet.
This code works exactly as intended:
$query = new Zend_Gdata_Spreadsheets_DocumentQuery();
$query->setSpreadsheetKey($this->spreadsheetId);
$feed = $this->spreadsheetService->getWorksheetFeed($query);
var_dump($feed);
But I need the last-modified value of this spreadsheet.
This value is protected in the $feed object:
["Last-modified"]=>
string(29) "Wed, 18 Sep 2013 08:58:44 GMT"
Is there any way to get the "last-modified" key of a Google Doc?
And how can I access it via Zend v1?