How to get multiple resources using one URI? - tastypie

When I issue this curl command:
curl http://localhost/refeq/v1/vehiclesequipements/?format=json -s | jq .
I get:
{
"objects":[
{
"vehicle_id": 9759,
"resource_uri": "/refeq/v1/vehiclesequipements/9759/",
"noArticle": "",
"name": "OBCIV_TFT2",
"mac": "84:7e:40:e9:f2:1e"
},
{
"vehicle_id": 9899,
"resource_uri": "/refeq/v1/vehiclesequipements/9899/",
"noArticle": "",
"name": "FILM_FRONT",
"mac": "00:40:9d:a2:36:fe"
},
{
"vehicle_id": 9899,
"resource_uri": "/refeq/v1/vehiclesequipements/9899/",
"noArticle": "",
"name": "OBCIV_TFT1",
"mac": "84:7e:40:ea:a4:36"
}
],
"meta": {
"total_count": 3,
"offset": 0,
"limit": 0
}
}
I would like to get the two resources for vehicle_id 9899, i.e. something like this:
{
"objects":[
{
"vehicle_id": 9899,
"resource_uri": "/refeq/v1/vehiclesequipements/9899/",
"noArticle": "",
"name": "FILM_FRONT",
"mac": "00:40:9d:a2:36:fe"
},
{
"vehicle_id": 9899,
"resource_uri": "/refeq/v1/vehiclesequipements/9899/",
"noArticle": "",
"name": "OBCIV_TFT1",
"mac": "84:7e:40:ea:a4:36"
}
],
"meta": {
"total_count": 2,
"offset": 0,
"limit": 0
}
}
The problem I'm facing is that
curl http://localhost/refeq/v1/vehiclesequipements/9899/?format=json -s
returns:
More than one resource is found at this URI.
How can I return multiple resources that share the same resource_uri?
I looked at the docs, but I don't understand how to achieve this.
Here's my api.py
from tastypie.resources import ModelResource
from tastypie.serializers import Serializer
from tastypie.authorization import Authorization
from refeq.models import VehiclesEquipements

USE_LOCAL_TIME = True

class MyDateSerializer(Serializer):
    def format_datetime(self, data):
        return data.strftime("%Y-%m-%dT%H:%M:%S")

class VehiclesEquipementsResource(ModelResource):
    class Meta:
        queryset = VehiclesEquipements.objects.all()
        resource_name = 'vehiclesequipements'
        #filtering = {"vehicle_id":["exact","in"]}
        authorization = Authorization()
        limit = 0
        max_limit = 0
        if USE_LOCAL_TIME:
            serializer = MyDateSerializer()
and models.py
class VehiclesEquipements(models.Model):
    vehicle_id = models.IntegerField(primary_key=True)
    noArticle = models.CharField(max_length=30, blank=True)
    name = models.CharField(max_length=30, blank=True)
    mac = models.CharField(max_length=30, blank=True)

    class Meta:
        db_table = 'vw_vehicles_equipements'
        managed = False
Final note:
I cannot use URI filtering:
curl 'http://dope-apipc01/refeq/v1/vehiclesequipements/?format=json&vehicle_id__exact=9899'
I am a newbie to Django and Tastypie... Any help is appreciated.

It seems that to access a resource it has to have a unique resource_uri; otherwise a MultipleObjectsReturned exception is raised.
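If the only reason URI filtering cannot be used is that it is disabled on the resource, one possible workaround (a minimal sketch, assuming the commented-out filtering line in api.py is all that is missing) is to enable it and query the list endpoint instead of the detail endpoint:

from tastypie.resources import ModelResource
from tastypie.authorization import Authorization
from refeq.models import VehiclesEquipements

class VehiclesEquipementsResource(ModelResource):
    class Meta:
        queryset = VehiclesEquipements.objects.all()
        resource_name = 'vehiclesequipements'
        authorization = Authorization()
        limit = 0
        max_limit = 0
        # enable list filtering on vehicle_id (mirrors the commented-out line in the question)
        filtering = {"vehicle_id": ["exact", "in"]}

With that in place, a filtered list request such as

curl 'http://localhost/refeq/v1/vehiclesequipements/?format=json&vehicle_id=9899'

should return just the two objects for vehicle 9899. The detail URI /vehiclesequipements/9899/ will keep raising MultipleObjectsReturned as long as several rows of the view share the same vehicle_id, since Tastypie resolves a detail URI to exactly one object.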

Related

How to extract values from json string from api in scala?

I am trying to extract a specific value from each JSON object in a response from an API.
For example, the HTTP response is a JSON array like the one below:
[
{
"trackerType": "WEB",
"id": 1,
"appId": "ap-website",
"host": {
"orgId": "ap",
"displayName": "AP Mart",
"id": "3",
"tenantId": "ap"
}
},
{
"trackerType": "WEB",
"id": 2,
"appId": "test-website",
"host": {
"orgId": "t1",
"tenantId": "trn11"
}
}
]
I want to keep only a list of the appId and tenantId values, as below:
[
{
"appId": "ap-website",
"tenantId": "ap"
},
{
"appId": "test-website",
"tenantId": "trn11"
}
]
If your HTTP response is quite big and you don't want to hold it all in memory, consider using IO streams for parsing the body and serializing the result list.
Below is an example of how it can be done with the dijon library.
Add the dependency to your build file:
libraryDependencies += "me.vican.jorge" %% "dijon" % "0.5.0+18-46bbb74d", // Use %%% instead of %% for Scala.js
Import the following packages:
import com.github.plokhotnyuk.jsoniter_scala.core._
import dijon._
import scala.language.dynamics._
Parse the input stream, transforming it value by value in the callback and writing to the output stream:
val in = new java.io.ByteArrayInputStream(
  """
    [
      {
        "trackerType": "WEB",
        "id": 1,
        "appId": "ap-website",
        "host": {
          "orgId": "ap",
          "displayName": "AP Mart",
          "id": "3",
          "tenantId": "ap"
        }
      },
      {
        "trackerType": "WEB",
        "id": 2,
        "appId": "test-website",
        "host": {
          "orgId": "t1",
          "tenantId": "trn11"
        }
      }
    ]
  """.getBytes("UTF-8"))
val out = new java.io.BufferedOutputStream(System.out)

out.write('[')
scanJsonArrayFromStream[SomeJson](in) {
  var writeComma = false
  x =>
    if (writeComma) out.write(',') else writeComma = true
    val json = obj("appId" -> x.appId, "tenantId" -> x.host.tenantId)
    writeToStream[SomeJson](json, out)(codec)
    true
}(codec)
out.write(']')
out.flush()
You can try it with Scastie here
When using this code in your application, replace the input and output streams with your real source and destination.
There are other ways to solve this task; please add more context to help pick the simplest and most efficient solution.
Feel free to comment - I will be happy to help you tune the solution to your needs.

Obtaining raw user and session data

So I'm really new to working with the Google Analytics API. I have managed to make this request work:
{
"dateRange": {
"startDate": "2021-01-08",
"endDate": "2021-05-05"
},
"activityTypes": [
"GOAL"
],
"user": {
"type": "CLIENT_ID",
"userId": "2147448080.1620199617"
},
"viewId": "1556XXX89"
}
such that I can get back a JSON response like:
{
"sessions": [
{
"sessionId": "1620199614",
"deviceCategory": "mobile",
"platform": "Android",
"dataSource": "web",
"activities": [
{
"activityTime": "2021-05-05T07:53:08.366983Z",
"source": "(direct)",
"medium": "(none)",
"channelGrouping": "Direct",
"campaign": "(not set)",
"keyword": "(not set)",
"hostname": "somewebsite.com",
"landingPagePath": "/client/loginorcreate/login",
"activityType": "GOAL",
"customDimension": [
{
"index": 1
},
{
"index": 2
},
{
"index": 3,
"value": "59147"
}
],
"goals": {
"goals": [
{
"goalIndex": 1,
"goalCompletions": "1",
"goalCompletionLocation": "/order/registerorder/postregister.html",
"goalPreviousStep1": "page-z",
"goalPreviousStep2": "page-y",
"goalPreviousStep3": "page-x",
"goalName": "order"
}
]
}
}
],
"sessionDate": "2021-05-05"
}
],
"totalRows": 1,
"sampleRate": 1
}
Now, ideally, I would get the response in a different format, and more importantly one where I don't have to specify each individual client ID. Is there a request format I can build that would return something like:
clientID1 | activityTime | sessionId | activities
clientID2 | activityTime | sessionId | activities
clientID3 | activityTime | sessionId | activities
thanks!
The Google Analytics API returns data in the JSON format that you are seeing currently. How you format that data is up to you; however, this must be done locally.
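As a rough sketch of that local formatting (assuming the response shown above has been loaded into a Python dict, e.g. with json.load; note the client ID is not echoed back in the response, it is the userId you sent in the request):

import json

def flatten_sessions(client_id, activity_response):
    # Turn the User Activity response into one row per activity:
    # clientId | activityTime | sessionId | activityType
    rows = []
    for session in activity_response.get("sessions", []):
        for activity in session.get("activities", []):
            rows.append({
                "clientId": client_id,
                "activityTime": activity.get("activityTime"),
                "sessionId": session.get("sessionId"),
                "activityType": activity.get("activityType"),
            })
    return rows

# Example usage with the response saved to a file:
# with open("response.json") as f:
#     data = json.load(f)
# for row in flatten_sessions("2147448080.1620199617", data):
#     print(row)

As far as I know you still have to issue one request per client ID, since the User Activity request body takes a single userId.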

How to create Map<String, String> and Map<String, List<String>> protobuf scala

I am new to Scala and protobufs. I want to create an object something like this:
{
"id": "usr-435-899",
"type": "SALES",
"filters": {
"country": [
"usa",
"germany"
],
"indication": [
"delivery"
]
}
}
So I think I cannot create a POJO like this with protobufs.
Now I've created a different JSON, which is:
{
"id": "usr-435-899",
"type": "SALES",
"filters": {
"country": {
"value": [
"usa",
"germany"
]
},
"indication": {
"value": [
"delivery"
]
}
}
}
So I've created a proto something like this:
message ListOfValues {
repeated string value = 1;
}
message AuditRequest {
required string id = 1;
required string type = 2;
map<string, ListOfValues> filters = 3;
}
But when I try to hit the API from Postman it says 404 Not Found.
Can anyone tell what's wrong with this? And can we create a proto for the first JSON?

Can't get Service Alerts Protobuf to include header_text or description_text using Python gtfs_realtime_pb2 module

We are having difficulty adding header_text and description_text to a Service Alerts protobuf file. We are attempting to match the example shown on this page:
https://developers.google.com/transit/gtfs-realtime/examples/alerts
Our data starts in the following dictionary:
alerts_dict = {
"header": {
"gtfs_realtime_version": "1",
"timestamp": "1543318671",
"incrementality": "FULL_DATASET"
},
"entity": [{
"497": {
"active_period": [{
"start": 1525320000,
"end": 1546315200
}],
"url": "http://www.capmetro.org/planner",
"effect": 4,
"header_text": "South 183: Airport",
"informed_entity": [{
"route_type": "3",
"route_id": "17",
"trip": "",
"stop_id": "3304"
}, {
"route_type": "3",
"route_id": "350",
"trip": "",
"stop_id": "3304"
}],
"description_text": "Stop closed temporarily",
"cause": 2
},
"460": {
"active_period": [{
"start": 1519876800,
"end": 1546315200
}],
"url": "http://www.capmetro.org/planner",
"effect": 4,
"header_text": "Ave F / Duval Detour",
"informed_entity": [{
"route_type": "3",
"route_id": "7",
"trip": "",
"stop_id": "1167"
}, {
"route_type": "3",
"route_id": "7",
"trip": "",
"stop_id": "1268"
}],
"description_text": "Stop closed temporarily",
"cause": 2
}
}]
}
Our Python code is as follows:
newfeed = gtfs_realtime_pb2.FeedMessage()
newfeedheader = newfeed.header
newfeedheader.gtfs_realtime_version = '2.0'
for alert_id, alert_dict in alerts_dict["entity"][0].iteritems():
    print(alert_id)
    print(alert_dict)
    newentity = newfeed.entity.add()
    newalert = newentity.alert
    newentity.id = str(alert_id)
    newtimerange = newalert.active_period.add()
    newtimerange.end = alert_dict['active_period'][0]['end']
    newtimerange.start = alert_dict['active_period'][0]['start']
    for informed in alert_dict['informed_entity']:
        newentityselector = newalert.informed_entity.add()
        newentityselector.route_id = informed['route_id']
        newentityselector.route_type = int(informed['route_type'])
        newentityselector.stop_id = informed['stop_id']
    print(alert_dict['description_text'])
    newdescription = newalert.header_text
    newdescription = alert_dict['description_text']
    newalert.cause = alert_dict['cause']
    newalert.effect = alert_dict['effect']
pb_feed = newfeed.SerializeToString()
with open("servicealerts.pb", 'wb') as fout:
    fout.write(pb_feed)
The frustrating part is that we don't receive any sort of error message. Everything appears to run properly but the resulting pb file doesn't contain the new header_text or description_text items.
We are able to read the pb file using the following code:
feed = gtfs_realtime_pb2.FeedMessage()
response = open("servicealerts.pb")
feed.ParseFromString(response.read())
print(feed)
We truly appreciate any help that anyone can offer in pointing us in the right direction of figuring this out.
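For what it's worth, a likely explanation (an assumption on my part, not confirmed in the thread): header_text and description_text are TranslatedString messages in the GTFS-realtime schema, so newdescription = newalert.header_text followed by newdescription = alert_dict['description_text'] only rebinds a local Python name and never touches the protobuf field, which is why no error is raised and nothing is written. Setting those fields directly would look roughly like this sketch:

from google.transit import gtfs_realtime_pb2

newfeed = gtfs_realtime_pb2.FeedMessage()
newfeed.header.gtfs_realtime_version = '2.0'

newentity = newfeed.entity.add()
newentity.id = "497"
newalert = newentity.alert

# TranslatedString fields cannot be assigned a plain str;
# add a Translation sub-message and set its text/language instead.
header = newalert.header_text.translation.add()
header.text = "South 183: Airport"
header.language = "en"

description = newalert.description_text.translation.add()
description.text = "Stop closed temporarily"
description.language = "en"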
I was able to find the answer. This Python Notebook showed that, by properly formatting the dictionary, the PB could be generated with a few lines of code.
from google.transit import gtfs_realtime_pb2
from google.protobuf.json_format import ParseDict

newfeed = gtfs_realtime_pb2.FeedMessage()
ParseDict(alerts_dict, newfeed)
pb_feed = newfeed.SerializeToString()
with open("servicealerts.pb", 'wb') as fout:
    fout.write(pb_feed)
All I had to do was format my dictionary properly.
if ALERT_GROUP_ID not in entity_dict.keys():
    entity_dict[ALERT_GROUP_ID] = {"id": ALERT_GROUP_ID,
        "alert": {
            "active_period": [{
                "start": int(START_TIME),
                "end": int(END_TIME)
            }],
            "cause": cause_dict.get(CAUSE, ""),
            "effect": effect_dict.get(EFFECT),
            "url": {
                "translation": [{
                    "text": URL,
                    "language": "en"
                }]
            },
            "header_text": {
                "translation": [{
                    "text": HEADER_TEXT,
                    "language": "en"
                }]
            },
            "informed_entity": [{
                'route_id': ROUTE_ID,
                'route_type': ROUTE_TYPE,
                'trip': TRIP,
                'stop_id': STOP_ID
            }],
            "description_text": {
                "translation": [{
                    "text": "Stop closed temporarily",
                    "language": "en"
                }]
            },
        },
    }
    # print(entity_dict[ALERT_GROUP_ID]["alert"]['informed_entity'])
else:
    entity_dict[ALERT_GROUP_ID]["alert"]['informed_entity'].append({
        'route_id': ROUTE_ID,
        'route_type': ROUTE_TYPE,
        'trip': TRIP,
        'stop_id': STOP_ID
    })

Fields are empty when doing GET in elastic4s

I'm trying to implement a service in my Play 2 app that uses elastic4s to get a document by ID.
My document in elasticsearch:
curl -XGET 'http://localhost:9200/test/venues/3659653'
{
"_index": "test",
"_type": "venues",
"_id": "3659653",
"_version": 1,
"found": true,
"_source": {
"id": 3659653,
"name": "Salong Anna och Jag",
"description": "",
"telephoneNumber": "0811111",
"postalCode": "16440",
"streetAddress": "Kistagången 12",
"city": "Kista",
"lastReview": null,
"location": {
"lat": 59.4045675,
"lon": 17.9502138
},
"pictures": [],
"employees": [],
"reviews": [],
"strongTags": [
"skönhet ",
"skönhet ",
"skönhetssalong"
],
"weakTags": [
"Frisörsalong",
"Frisörer"
],
"reviewCount": 0,
"averageGrade": 0,
"roundedGrade": 0,
"recoScore": 0
}
}
My Service:
@Singleton
class VenueSearchService extends ElasticSearchService[IndexableVenue] {
  /**
   * Elastic search conf
   */
  override def path = "test/venues"

  def getVenue(companyId: String) = {
    val resp = client.execute(
      get id companyId from path
    ).map { response =>
      // transform response to IndexableVenue
      response
    }
    resp
  }
}
If I use getFields() on the response object I get an empty object. But if I call response.getSourceAsString I get the document as json:
{
"id": 3659653,
"name": "Salong Anna och Jag ",
"description": "",
"telephoneNumber": "0811111",
"postalCode": "16440",
"streetAddress": "Kistagången 12",
"city": "Kista",
"lastReview": null,
"location": {
"lat": 59.4045675,
"lon": 17.9502138
},
"pictures": [],
"employees": [],
"reviews": [],
"strongTags": [
"skönhet ",
"skönhet ",
"skönhetssalong"
],
"weakTags": [
"Frisörsalong",
"Frisörer"
],
"reviewCount": 0,
"averageGrade": 0,
"roundedGrade": 0,
"recoScore": 0
}
As you can see, the get request omits this info:
"_index": "test",
"_type": "venues",
"_id": "3659653",
"_version": 1,
"found": true,
"_source": {}
If I try to do a regular search:
def getVenue(companyId: String) = {
  val resp = client.execute(
    search in "test" -> "venues" query s"id:${companyId}"
    //get id companyId from path
  ).map { response =>
    Logger.info("response: " + response.toString)
  }
  resp
}
I get:
{
"took": 2,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 1,
"hits": [
{
"_index": "test",
"_type": "venues",
"_id": "3659653",
"_score": 1,
"_source": {
"id": 3659653,
"name": "Salong Anna och Jag ",
"description": "",
"telephoneNumber": "0811111",
"postalCode": "16440",
"streetAddress": "Kistagången 12",
"city": "Kista",
"lastReview": null,
"location": {
"lat": 59.4045675,
"lon": 17.9502138
},
"pictures": [],
"employees": [],
"reviews": [],
"strongTags": [
"skönhet ",
"skönhet ",
"skönhetssalong"
],
"weakTags": [
"Frisörsalong",
"Frisörer"
],
"reviewCount": 0,
"averageGrade": 0,
"roundedGrade": 0,
"recoScore": 0
}
}
]
}
}
My Index Service:
trait ElasticIndexService[T <: ElasticDocument] {
  val clientProvider: ElasticClientProvider
  def path: String

  def indexInto[T](document: T, id: String)(implicit writes: Writes[T]): Future[IndexResponse] = {
    Logger.debug(s"indexing into $path document: $document")
    clientProvider.getClient.execute {
      index into path doc JsonSource(document) id id
    }
  }
}

case class JsonSource[T](document: T)(implicit writes: Writes[T]) extends DocumentSource {
  def json: String = {
    val js = Json.toJson(document)
    Json.stringify(js)
  }
}
and indexing:
@Singleton
class VenueIndexService @Inject()(
    stuff...) extends ElasticIndexService[IndexableVenue] {

  def indexVenue(indexableVenue: IndexableVenue) = {
    indexInto(indexableVenue, s"${indexableVenue.id.get}")
  }
}
Why is getFields empty when doing a get?
Why is the surrounding metadata (_index, _type, _id, etc.) left out when calling getSourceAsString on a get request?
Thank you!
What you're hitting in question 1 is that you're not specifying which fields to return. By default ES will return the _source and not fields (other than _type and _id). See http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-fields.html
I've added a test to elastic4s to show how to retrieve fields, see:
https://github.com/sksamuel/elastic4s/blob/master/src%2Ftest%2Fscala%2Fcom%2Fsksamuel%2Felastic4s%2FSearchTest.scala
I am not sure on question 2.
The fields are empty because Elasticsearch doesn't return them by default.
If you need fields, you must indicate in the query which fields you need.
This is your search query without fields:
search in "test"->"venues" query s"id:${companyId}"
And in this query we indicate which fields we want, in this case 'name' and 'description':
search in "test"->"venues" fields ("name","description") query s"id:${companyId}"
Now you can retrieve the fields:
for (x <- response.getHits.hits()) {
  println(x.getFields.get("name").getValue)
}
You get the document from getSourceAsString in a get request because the _source parameter defaults to 'on' while fields defaults to 'off'.
I hope this helps.