I have a problem with qualifiers in SPARQL.
I have this query:
SELECT ?title ?item ?date ?place WHERE {
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  ?item wdt:P161 wd:Q38111.
  ?item wdt:P1476 ?title.
  ?item wdt:P577 ?date.
  # how to add ?place, i.e. the place of publication qualifier on P577 of the current movie?
}
This query shows me movies that include Leonardo DiCaprio as an actor. I want to add another column named "place", meaning "place of publication". This place of publication is a qualifier of property P577 (the publication date of the movie).
Does anybody have a clue how to do this?
Thanks for any advice.
Applying @AKSW's comment to the query is a pretty trivial exercise. Did you try to do this yourself? The key is to go through the statement node: p:P577 links the item to its publication-date statement, ps:P577 gives the statement's main value (the date), and pq:P291 gives the qualifier value (the place of publication).
SELECT ?title ?item ?date ?place WHERE {
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  ?item wdt:P161 wd:Q38111.
  ?item wdt:P1476 ?title.
  # ?item wdt:P577 ?date.
  # how to add ?place, i.e. the place of publication qualifier on P577 of the current movie?
  ?item p:P577 ?statement.       # p: links the item to the full publication-date statement
  ?statement ps:P577 ?date.      # ps: gives the statement's main value (the date)
  ?statement pq:P291 ?place      # pq: gives the qualifier value (the place of publication)
}
And to get the place name for each ?place:
SELECT ?title ?item ?date ?place ?placeLabel WHERE {
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  ?item wdt:P161 wd:Q38111.
  ?item wdt:P1476 ?title.
  # ?item wdt:P577 ?date.
  # how to add ?place, i.e. the place of publication qualifier on P577 of the current movie?
  ?item p:P577 ?statement.
  ?statement ps:P577 ?date.
  ?statement pq:P291 ?place
}
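Note that the pq:P291 triple above makes the place qualifier mandatory, so publication-date statements without that qualifier disappear from the results. If you want to keep those movies as well, a sketch using the same properties wraps the qualifier in OPTIONAL:

SELECT ?title ?item ?date ?place ?placeLabel WHERE {
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  ?item wdt:P161 wd:Q38111.
  ?item wdt:P1476 ?title.
  ?item p:P577 ?statement.
  ?statement ps:P577 ?date.
  OPTIONAL { ?statement pq:P291 ?place }   # keeps rows even when no place qualifier exists
}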
With Terraform 0.12, I am creating a static web site in an S3 bucket:
...
resource "aws_s3_bucket" "www" {
bucket = "example.com"
acl = "public-read"
policy = <<-POLICY
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": ["s3:GetObject"],
"Resource": ["arn:aws:s3:::example.com/*"]
}]
}
POLICY
website {
index_document = "index.html"
error_document = "404.html"
}
tags = {
Environment = var.environment
Terraform = "true"
}
}
resource "aws_route53_zone" "main" {
name = "example.com"
tags = {
Environment = var.environment
Terraform = "true"
}
}
resource "aws_route53_record" "main-ns" {
zone_id = aws_route53_zone.main.zone_id
name = "example.com"
type = "A"
alias {
name = aws_s3_bucket.www.website_endpoint
zone_id = aws_route53_zone.main.zone_id
evaluate_target_health = false
}
}
I get the error:
Error: [ERR]: Error building changeset: InvalidChangeBatch:
[Tried to create an alias that targets example.com.s3-website-us-west-2.amazonaws.com., type A in zone Z1P...9HY, but the alias target name does not lie within the target zone,
Tried to create an alias that targets example.com.s3-website-us-west-2.amazonaws.com., type A in zone Z1P...9HY, but that target was not found]
status code: 400, request id: 35...bc
on main.tf line 132, in resource "aws_route53_record" "main-ns":
132: resource "aws_route53_record" "main-ns" {
What is wrong?
The zone_id within alias must be the S3 bucket's hosted zone ID, not the Route 53 zone ID. The corrected aws_route53_record resource is:
resource "aws_route53_record" "main-ns" {
zone_id = aws_route53_zone.main.zone_id
name = "example.com"
type = "A"
alias {
name = aws_s3_bucket.www.website_endpoint
zone_id = aws_s3_bucket.www.hosted_zone_id # Corrected
evaluate_target_health = false
}
}
Here is an example for CloudFront. The variables are:
base_url                = "example.com"
cloudfront_distribution = "EXXREDACTEDXXX"
domain_names            = ["example.com", "www.example.com"]
The Terraform code is:
data "aws_route53_zone" "this" {
name = var.base_url
}
data "aws_cloudfront_distribution" "this" {
id = var.cloudfront_distribution
}
resource "aws_route53_record" "this" {
for_each = toset(var.domain_names)
zone_id = data.aws_route53_zone.this.zone_id
name = each.value
type = "A"
alias {
name = data.aws_cloudfront_distribution.this.domain_name
zone_id = data.aws_cloudfront_distribution.this.hosted_zone_id
evaluate_target_health = false
}
}
Many users hardcode zone_id = "Z2FDTNDATAQYW2" for CloudFront because it's always Z2FDTNDATAQYW2... until some day maybe it isn't. I like to avoid the literal string by computing it with the aws_cloudfront_distribution data source.
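For comparison, the hardcoded variant this replaces would sit inside the same aws_route53_record alias block (a sketch; the literal is the hosted zone ID CloudFront currently uses, as noted above):

  alias {
    name                   = data.aws_cloudfront_distribution.this.domain_name
    zone_id                = "Z2FDTNDATAQYW2" # hardcoded CloudFront hosted zone ID
    evaluate_target_health = false
  }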
For anyone like me who came here from Google hoping to find the syntax for CloudFormation in YAML, here is how you can achieve it for your sub-domains.
Here we add a DNS record to Route 53 and point all the subdomains of example.com to this ALB:
AlbDnsRecord:
  Type: "AWS::Route53::RecordSet"
  DependsOn: [ALB_LOGICAL_ID]
  Properties:
    HostedZoneName: "example.com."
    Type: "A"
    Name: "*.example.com."
    AliasTarget:
      DNSName: !GetAtt [ALB_LOGICAL_ID].DNSName
      EvaluateTargetHealth: False
      HostedZoneId: !GetAtt [ALB_LOGICAL_ID].CanonicalHostedZoneID
    Comment: "A record for Stages ALB"
My mistakes were:
not adding a trailing . at the end of my HostedZoneName
under AliasTarget.HostedZoneId, the ID at the end of CanonicalHostedZoneID is all uppercase
Replace [ALB_LOGICAL_ID] with the actual logical ID of your ALB; for me it was something like ALBStages.DNSName.
You should already have the hosted zone in your Route 53.
So for us, all the addresses below will resolve to this ALB:
dev01.example.com
dev01api.example.com
dev02.example.com
dev02api.example.com
qa01.example.com
qa01api.example.com
qa02.example.com
qa02api.example.com
uat.example.com
uatapi.example.com
Is there a way to return entities of a kind that have no descendants in Google Cloud Datastore?
If your question is whether you can retrieve an entity that has no descendants, then yes: you can retrieve any entity through its key (or through a query).
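For the single-entity case, a key lookup through the Datastore REST API looks roughly like this (a sketch; gcurl is assumed to be an alias for curl with the required authorization header, and the kind name and id are placeholders):

gcurl -s -H'content-type:application/json' "https://datastore.googleapis.com/v1/projects/$(gcloud config get-value project):lookup" -d'{
  "keys": [
    {
      "partitionId": { "namespaceId": "namespace_id" },
      "path": [
        { "kind": "some_kind", "id": "1234567890" }
      ]
    }
  ]
}'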
However, if you intend to run a query that retrieves all the child-less entities in one go, that is not possible. The ancestry information is stored in the descendant entities, so you would have to collect the ancestor keys of all the descendant entities (through a projection query on __key__), and then query the entities of the parent kind, keeping only those whose key does not appear among the collected ancestors.
Using curl and jq in a shell (where gcurl is an alias for curl with the required authorization header), it could be something like the following:
export ancestors=$(gcurl -s -H'content-type:application/json' "https://datastore.googleapis.com/v1/projects/$(gcloud config get-value project):runQuery?fields=batch%2FentityResults%2Fentity%2Fkey%2Fpath" -d"{
  \"partitionId\": {
    \"projectId\": \"$(gcloud config get-value project)\",
    \"namespaceId\": \"namespace_id\"
  },
  \"query\": {
    \"kind\": [
      {
        \"name\": \"descendant_entity_name\"
      }
    ],
    \"projection\": [
      {
        \"property\": {
          \"name\": \"__key__\"
        }
      }
    ]
  }
}" | jq '[.batch.entityResults[].entity.key.path | select(length > 1 ) | .[-2].id]')
gcurl -H'content-type:application/json' "https://datastore.googleapis.com/v1/projects/$(gcloud config get-value project):runQuery?fields=batch%2FentityResults%2Fentity%2Fkey%2Fpath" -d"{
  \"partitionId\": {
    \"projectId\": \"$(gcloud config get-value project)\",
    \"namespaceId\": \"namespace_id\"
  },
  \"query\": {
    \"kind\": [
      {
        \"name\": \"ancestor_entity_name\"
      }
    ],
    \"projection\": [
      {
        \"property\": {
          \"name\": \"__key__\"
        }
      }
    ]
  }
}" | jq '.batch.entityResults[].entity.key.path[-1].id | select(inside(env.ancestors)|not)'
I have the following problem, which I will explain with an example:
I want to retrieve the object Berlin from the triple Germany - capital - object.
I must use labels, because those are inputs in my program.
The following query gives me back the propertyLabel capital:
prefix wdt: <http://www.wikidata.org/prop/direct/>
prefix wikibase: <http://wikiba.se/ontology#>
prefix bd: <http://www.bigdata.com/rdf#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?propertyLabel WHERE {
  ?property a wikibase:Property .
  ?property rdfs:label "capital"@en .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" .}
}
The following query, with the label Germany and the URI P36 (capital), gives me back the desired information, Berlin:
prefix wdt: <http://www.wikidata.org/prop/direct/>
prefix wikibase: <http://wikiba.se/ontology#>
prefix bd: <http://www.bigdata.com/rdf#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?objectLabel WHERE {
  ?subject wdt:P36 ?object .
  ?subject rdfs:label "Germany"@en .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" .}
}
But I want to refer to P36 by its label instead. I tried various approaches with two SELECTs or a UNION, but I get thousands of results or none. The query should look like this (although this one doesn't work):
prefix wdt: <http://www.wikidata.org/prop/direct/>
prefix wikibase: <http://wikiba.se/ontology#>
prefix bd: <http://www.bigdata.com/rdf#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?objectLabel WHERE {
  ?subject ?property ?object .
  ?subject rdfs:label "Germany"@en .
  ?property a wikibase:Property .
  ?property rdfs:label "capital"@en .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" .}
}
As already mentioned, the query has to return Berlin and nothing else. Thanks in advance.
The problem is that your property lookup for the label "capital" returns http://www.wikidata.org/entity/P36, but the instance data uses http://www.wikidata.org/prop/direct/P36. A workaround might be:
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX bd: <http://www.bigdata.com/rdf#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?objectLabel WHERE {
  ?subject ?property ?object ;
           rdfs:label "Germany"@en .
  ?p a wikibase:Property ;
     rdfs:label "capital"@en .
  BIND(STRAFTER(STR(?p), STR(wd:)) as ?p_localname)
  BIND(IRI(CONCAT(STR(wdt:), ?p_localname)) as ?property)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" .}
}
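An alternative that avoids the string manipulation is wikibase:directClaim, which on the Wikidata Query Service links a property entity such as wd:P36 to its direct-claim predicate wdt:P36. A sketch of that variant:

PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX bd: <http://www.bigdata.com/rdf#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?objectLabel WHERE {
  ?p a wikibase:Property ;
     rdfs:label "capital"@en ;
     wikibase:directClaim ?property .   # maps the property entity to its wdt: predicate
  ?subject ?property ?object ;
           rdfs:label "Germany"@en .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" .}
}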
In one service I'm trying to do something like:
Organization.withCriteria {
    eq( "active", true )
    eq( "location.region", region)
}
which works, but when calling the method inside a unit test I get:
java.lang.NullPointerException
at org.grails.datastore.mapping.keyvalue.mapping.config.KeyValuePersistentEntity.getPropertyByName(KeyValuePersistentEntity.java:75)
at grails.gorm.CriteriaBuilder.validatePropertyName(CriteriaBuilder.java:954)
at grails.gorm.CriteriaBuilder.eq(CriteriaBuilder.java:435)
at com.apposit.terra.connect.service.OrganizationService.getAllOrganizationsInZone_closure9(OrganizationService.groovy:322)
at grails.gorm.CriteriaBuilder.invokeClosureNode(CriteriaBuilder.java:980)
at grails.gorm.CriteriaBuilder.invokeMethod(CriteriaBuilder.java:314)
at org.grails.datastore.gorm.GormStaticApi.withCriteria_closure11(GormStaticApi.groovy:305)
at org.grails.datastore.mapping.core.DatastoreUtils.execute(DatastoreUtils.java:302)
at org.grails.datastore.gorm.AbstractDatastoreApi.execute(AbstractDatastoreApi.groovy:37)
at org.grails.datastore.gorm.GormStaticApi.withCriteria(GormStaticApi.groovy:304)
Should be:
Organization.withCriteria {
    eq( "active", true )
    location {
        eq( "region", region)
    }
}
If not, please file a JIRA at http://jira.grails.org/browse/GPMONGODB.
We've got a problem: we cannot execute a SQL SELECT on the structure below using Sequelize.
This is our structure; see the image below.
We are trying to execute this sequelize query:
query = {
    'attributes': [models.sequelize.fn('count', models.sequelize.col('sent_deals.user_id'))],
    order: [['alert.keywords', ' DESC']],
    attributes: ['alert.keywords'],
    include: [
        {
            model: models.alert,
            attributes: ['keywords']
        }, {
            where: {
                'products.sent_deal.created_at': {
                    between: [moment(start).startOf('day').format('YYYY-MM-DD HH:mm'), moment(end).endOf('day').format('YYYY-MM-DD HH:mm')]
                }
            },
            model: models.product,
            attributes: ['id']
        }
    ]
};
models.userAlerts.findAll(query)
But then we receive the following error: "error: missing FROM-clause entry for table "alert"", because we're trying to select the attribute alert.keywords outside the include[] as well. It DID work in the previous version of Sequelize JS, though, and now we cannot order by alert anymore... :( It always returns ONE alert because of the "belongs to" association, so theoretically it has to work.
My guess is that this happens because the query doesn't execute a direct join, but does a SELECT FROM on a subquery; see below:
SELECT "userAlerts".*
,"alert"."id" AS "alert.id"
,"alert"."keywords" AS "alert.keywords"
,"products"."id" AS "products.id"
,"products.sent_deal"."user_alert_id" AS "products.sent_deal.user_alert_id"
,"products.sent_deal"."deal_id" AS "products.sent_deal.deal_id"
,"products.sent_deal"."created_at" AS "products.sent_deal.created_at"
FROM (
SELECT "userAlerts"."id"
,"userAlerts"."user_id"
,"userAlerts"."alert_id"
,"userAlerts"."activationToken"
,"userAlerts"."activatedAt"
,"userAlerts"."createdAt"
,"userAlerts"."updatedAt"
FROM "user_alerts" AS "userAlerts"
WHERE (
SELECT "products.sent_deal"."user_alert_id"
FROM "sent_deals" AS "products.sent_deal"
INNER JOIN "deals" AS "products" ON "products"."id" = "products.sent_deal"."deal_id"
WHERE "userAlerts"."id" = "products.sent_deal"."user_alert_id"
AND (
"products.sent_deal"."created_at" BETWEEN '2014-11-17 00:00'
AND '2014-12-02 23:59'
) LIMIT 1
) IS NOT NULL
ORDER BY "alert"."keywords" DESC LIMIT 1
) AS "userAlerts"
LEFT JOIN "alerts" AS "alert" ON "alert"."id" = "userAlerts"."alert_id"
INNER JOIN (
"sent_deals" AS "products.sent_deal" INNER JOIN "deals" AS "products" ON "products"."id" = "products.sent_deal"."deal_id"
) ON "userAlerts"."id" = "products.sent_deal"."user_alert_id"
AND (
"products.sent_deal"."created_at" BETWEEN '2014-11-17 00:00'
AND '2014-12-02 23:59'
)
ORDER BY "alert"."keywords" DESC
Having the keywords attribute on the include should be enough; there is no need to add it to the outer query:
attributes: [models.sequelize.fn('count', models.sequelize.col('sent_deals.user_id'))],
order: [['alert.keywords', ' DESC']],
attributes: ['alert.keywords'], // <-- this line is superfluous
include: [
    {
        model: models.alert,
        attributes: ['keywords']
    }, {
        where: {
            'products.sent_deal.created_at': {
                between: [moment(start).startOf('day').format('YYYY-MM-DD HH:mm'), moment(end).endOf('day').format('YYYY-MM-DD HH:mm')]
            }
        },
        model: models.product,
        attributes: ['id']
    }
]
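If the goal of the outer alert.keywords attribute was to order by the alert's keywords, newer Sequelize versions allow referencing the included model directly in the order array. A sketch (not verified against the Sequelize version used here; model and attribute names are taken from the question):

models.userAlerts.findAll({
    attributes: [models.sequelize.fn('count', models.sequelize.col('sent_deals.user_id'))],
    // order by the associated alert's keywords column via the included model
    order: [[models.alert, 'keywords', 'DESC']],
    include: [
        { model: models.alert, attributes: ['keywords'] }
    ]
});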
Shouldn't you be including SentDeal in order to query products.sent_deal? This issue is probably better suited to an issue on the Sequelize GitHub.