How can I have multiple API Gateway paths with GET requests in the awsm.json?

I'm trying to create an endpoint with many path parameters:
/api/v1/{option1}
/api/v1/{option1}/{option2}
/api/v1/{option1}/{option2}/{option3}
Using the JAWS awsm.json, I want to create GET methods for all three routes. How (if possible) can I accomplish this using the Serverless Framework?
CF:
{
"lambda": {
"envVars": [],
"deploy": true,
"package": {
"optimize": {
"builder": "browserify",
"minify": true,
"ignore": [],
"exclude": [
"aws-sdk"
],
"includePaths": []
},
"excludePatterns": []
},
"cloudFormation": {
"Description": "",
"Handler": "aws_modules/static/handler.handler",
"MemorySize": 1024,
"Runtime": "nodejs",
"Timeout": 6
}
},
"apiGateway": {
..path => /api/v1/{firstname}..
}
}

At the moment, there is no way to do this via the Serverless Framework.
One thing I found out is that you can omit values in the URL, and the omitted segment is treated as blank.
For example:
api/v1/option1//option3
Here option2 is treated as blank, so this partly solves the issue, except that the user would need to add the additional slashes.
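For reference, a single endpoint definition in a JAWS-era awsm.json looked roughly like the sketch below. The field names are recalled from the v0 schema and the path is illustrative, so treat them as assumptions rather than a definitive template; each action's awsm.json describes only one path/method pair, which is consistent with the answer above that all three routes cannot be declared in one file:
"apiGateway": {
  "deploy": true,
  "cloudFormation": {
    "Type": "AWS",
    "Path": "api/v1/{option1}",
    "Method": "GET",
    "AuthorizationType": "none",
    "ApiKeyRequired": false,
    "RequestTemplates": {},
    "RequestParameters": {},
    "Responses": {
      "default": {
        "statusCode": "200"
      }
    }
  }
}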


Does WAF need to be included as part of Application Gateway creation?

As part of BCP/disaster recovery planning, we want to simulate a restoration scenario for the application gateway template we have in Bitbucket.
I have the JSON template as well as the parameters file in the repository, but I also see WAF rule template files for the gateway.
So basically there are four files, but New-AzResourceGroupDeployment only takes the main template file (-TemplateUri) and the parameters file (-TemplateParameterUri). So how would I be able to specify the WAF templates as part of the gateway creation?
I do see references to the WAF rules in the main gateway template file, but is that enough?
sku information:
"properties": {
"sku": {
"name": "WAF_v2",
"tier": "WAF_v2",
"capacity": 1
You can add WAF policies while creating the WAF v2 gateway by adding the below to the template file, the same way it is done in the Microsoft documentation, and providing the values as per your requirements:
{
"type": "Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies",
"apiVersion": "2020-06-01",
"name": "[variables('AGWafPol01')]",
"location": "[parameters('location')]",
"properties": {
"customRules": [
{
"name": "CustRule01",
"priority": 100,
"ruleType": "MatchRule",
"action": "Block",
"matchConditions": [
{
"matchVariables": [
{
"variableName": "RemoteAddr"
}
],
"operator": "IPMatch",
"negationConditon": true,
"matchValues": [
"10.10.10.0/24"
]
}
]
}
],
"policySettings": {
"requestBodyCheck": true,
"maxRequestBodySizeInKb": 128,
"fileUploadLimitInMb": 100,
"state": "Enabled",
"mode": "Prevention"
},
"managedRules": {
"managedRuleSets": [
{
"ruleSetType": "OWASP",
"ruleSetVersion": "3.1"
}
]
}
}
},
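As for whether the references in the main gateway template are enough: if the main template defines the policy resource as above and the Application Gateway resource points at it, a single deployment of that template is generally enough. Assuming the same variable name as above, that reference is a small block inside the gateway's properties, roughly like this sketch:
"firewallPolicy": {
  "id": "[resourceId('Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies', variables('AGWafPol01'))]"
}
You would typically also list the policy resource in the gateway's dependsOn array so it is created first, and then one New-AzResourceGroupDeployment of the main template deploys both resources together.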

JBPM created case and tasks not visible or accessible

I have successfully integrated Keycloak into jBPM for user management and can log in to Business Central and Case Management using Keycloak. I have also successfully configured the KIE Server with Keycloak credentials and can deploy a stripped-down version of the IT Orders sample application on the running sample-server KIE Server. I can see in Business Central that my container itorders_1.0.0-SNAPSHOT is up and running, and when I call GET /kie-server/services/rest/server/containers it gives the output below:
{
"type": "SUCCESS",
"msg": "List of created containers",
"result": {
"kie-containers": {
"kie-container": [
{
"container-id": "itorders_1.0.0-SNAPSHOT",
"release-id": {
"group-id": "itorders",
"artifact-id": "itorders",
"version": "1.0.0-SNAPSHOT"
},
"resolved-release-id": {
"group-id": "itorders",
"artifact-id": "itorders",
"version": "1.0.0-SNAPSHOT"
},
"status": "STARTED",
"scanner": {
"status": "DISPOSED",
"poll-interval": null
},
"config-items": [
{
"itemName": "KBase",
"itemValue": "",
"itemType": "BPM"
},
{
"itemName": "KSession",
"itemValue": "",
"itemType": "BPM"
},
{
"itemName": "MergeMode",
"itemValue": "MERGE_COLLECTIONS",
"itemType": "BPM"
},
{
"itemName": "RuntimeStrategy",
"itemValue": "PER_CASE",
"itemType": "BPM"
}
],
"messages": [
{
"severity": "INFO",
"timestamp": {
"java.util.Date": 1598900747932
},
"content": [
"Release id successfully updated for container itorders_1.0.0-SNAPSHOT"
]
}
],
"container-alias": "itorders"
}
]
}
}
}
I can get the case definitions using GET /kie-server/services/rest/server/queries/cases
{
"definitions": [
{
"name": "Order for IT hardware",
"id": "itorders.orderhardware",
"version": "1.0",
"case-id-prefix": "IT",
"container-id": "itorders_1.0.0-SNAPSHOT",
"adhoc-fragments": [
{
"name": "Prepare hardware spec",
"type": "HumanTaskNode"
}
],
"roles": {
"owner": 1
},
"milestones": [],
"stages": []
}
]
}
I can then do a POST /kie-server/services/rest/server/containers/itorders_1.0.0-SNAPSHOT/cases/itorders.orderhardware/instances, which correctly returns the case ID of the case created, e.g. IT-0000000014. The call returns HTTP status code 201.
However, when I do a GET /kie-server/services/rest/server/queries/cases/instances, no instances are returned, as shown below:
{
"instances": []
}
When I create a case in the jBPM Case Management showcase, I get the green prompt showing the case was successfully created; however, no open cases appear in the grid, even if I refresh the screen.
I can see the process instance associated with the case in the process instances view, including the diagram, which shows that "Prepare hardware spec" is active and the current activity. However, viewing the tasks associated with the process does not show any tasks. Similarly, the task inbox of the user I am expecting to claim the task is also empty.
Note that I am using token-based authentication with Keycloak and executed the above REST calls using Postman.
Why can I not view the case instance I created? Why can I not view the tasks associated with the process instance?
With the query GET /kie-server/services/rest/server/queries/cases/instances you can only see instances on which your user is set up as a potential owner. Make sure the user in the token is set up as a potential owner of the case.
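For example, here is a minimal sketch of a case-creation body that assigns a user to the owner role defined in the IT Orders sample. The user name kieuser is a placeholder, and the field names follow the kie-server case REST payload as I recall it, so verify them against your server's documentation:
{
  "case-data": {},
  "case-user-assignments": {
    "owner": "kieuser"
  },
  "case-group-assignments": {}
}
POST this as the body to /kie-server/services/rest/server/containers/itorders_1.0.0-SNAPSHOT/cases/itorders.orderhardware/instances so that the resulting case instance is visible to that user in the queries above.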

How to measure per-user bandwidth usage on Google Cloud Storage?

We want to charge users based on the amount of traffic their data generates, specifically the amount of downstream bandwidth their data consumes.
I have exported the Google Cloud Storage access logs. From the logs, I can count the number of times a file is accessed (filesize * count gives the bandwidth usage).
The problem is that this doesn't work well with cached content: my calculated value is much higher than the actual usage.
I went with this method because I expected our traffic to be mostly new and not served from cache, so the difference wouldn't matter. In reality, it turns out to be a real problem.
This is a common use case, and I think there should be a better way to solve this problem with Google Cloud Storage.
{
"insertId": "-tohip8e1vmvw",
"logName": "projects/bucket/logs/cloudaudit.googleapis.com%2Fdata_access",
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"authenticationInfo": {
"principalEmail": "firebase-storage#system.gserviceaccount.com"
},
"authorizationInfo": [
{
"granted": true,
"permission": "storage.objects.get",
"resource": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
"resourceAttributes": {}
},
{
"granted": true,
"permission": "storage.objects.getIamPolicy",
"resource": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
"resourceAttributes": {}
}
],
"methodName": "storage.objects.get",
"requestMetadata": {
"destinationAttributes": {},
"requestAttributes": {
"auth": {},
"time": "2019-07-02T11:58:36.068Z"
}
},
"resourceLocation": {
"currentLocations": [
"eu"
]
},
"resourceName": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
"serviceName": "storage.googleapis.com",
"status": {}
},
"receiveTimestamp": "2019-07-02T11:58:36.412798307Z",
"resource": {
"labels": {
"bucket_name": "bucket.appspot.com",
"location": "eu",
"project_id": "project-id"
},
"type": "gcs_bucket"
},
"severity": "INFO",
"timestamp": "2019-07-02T11:58:36.062Z"
}
An entry of the log.
We are using a single bucket for now. Can also use multiple if it helps.
One possibility is to have a separate bucket for each user and get each bucket's bandwidth usage through the Cloud Monitoring time series API.
The endpoint for this purpose is:
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list
The following are the parameters to get the bytes sent over one hour (any time range above 60s can be specified); summing the resulting points gives the total bytes sent from the bucket.
{
"dataSets": [
{
"timeSeriesFilter": {
"filter": "metric.type=\"storage.googleapis.com/network/sent_bytes_count\" resource.type=\"gcs_bucket\" resource.label.\"project_id\"=\"<<<< project id here >>>>\" resource.label.\"bucket_name\"=\"<<<< bucket name here >>>>\"",
"perSeriesAligner": "ALIGN_SUM",
"crossSeriesReducer": "REDUCE_SUM",
"secondaryCrossSeriesReducer": "REDUCE_SUM",
"minAlignmentPeriod": "3600s",
"groupByFields": [
"resource.label.\"bucket_name\""
],
"unitOverride": "By"
},
"targetAxis": "Y1",
"plotType": "LINE",
"legendTemplate": "${resource.labels.bucket_name}"
}
],
"options": {
"mode": "COLOR"
},
"constantLines": [],
"timeshiftDuration": "0s",
"y1Axis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
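Note that the JSON above is a monitoring chart definition. When calling the projects.timeSeries.list endpoint directly, the request is a GET whose query parameters (shown here as a JSON-style map for readability) would look roughly like the sketch below; the project ID, bucket name and time window are placeholders:
{
  "name": "projects/my-project-id",
  "filter": "metric.type=\"storage.googleapis.com/network/sent_bytes_count\" AND resource.type=\"gcs_bucket\" AND resource.label.\"bucket_name\"=\"my-bucket\"",
  "interval.startTime": "2019-07-02T11:00:00Z",
  "interval.endTime": "2019-07-02T12:00:00Z",
  "aggregation.alignmentPeriod": "3600s",
  "aggregation.perSeriesAligner": "ALIGN_SUM",
  "aggregation.crossSeriesReducer": "REDUCE_SUM",
  "aggregation.groupByFields": "resource.label.\"bucket_name\""
}
Summing the point values returned for the chosen interval gives the bytes sent by the bucket over that window.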

Cannot create a new layer (featuretype) in GeoServer using the REST API

I have just spent two working days trying to figure this out. We are automating a rendering process for maps. All the data is in an SQL database, and my job is to write a "wrapper" so we can implement this in our in-house framework. I managed all but one of the needed requests.
That request is the POST featuretype request, since this is the way to create a layer that can later be rendered.
I have all the requests saved in Postman for pre-testing on the example data provided by GeoServer itself. I can't even get a response with status code 201 and always get a 500 Internal Server Error. This status is described as a possible syntax error. But I actually just copied and pasted the example and used the GeoServer-provided data.
This is the request: http://127.0.0.1:8080/geoserver/rest/workspaces/tiger/datastores/nyc/featuretypes
and its body:
{
"name": "poi",
"nativeName": "poi",
"namespace": {
"name": "tiger",
"href": "http://localhost:8080/geoserver/rest/namespaces/tiger.json"
},
"title": "Manhattan (NY) points of interest",
"abstract": "Points of interest in New York, New York (on Manhattan). One of the attributes contains the name of a file with a picture of the point of interest.",
"keywords": {
"string": [
"poi",
"Manhattan",
"DS_poi",
"points_of_interest",
"sampleKeyword\\#language=ab\\;",
"area of effect\\#language=bg\\;\\#vocabulary=technical\\;",
"Привет\\#language=ru\\;\\#vocabulary=friendly\\;"
]
},
"metadataLinks": {
"metadataLink": [
{
"type": "text/plain",
"metadataType": "FGDC",
"content": "www.google.com"
}
]
},
"dataLinks": {
"org.geoserver.catalog.impl.DataLinkInfoImpl": [
{
"type": "text/plain",
"content": "http://www.google.com"
}
]
},
"nativeCRS": "GEOGCS[\"WGS 84\", \n DATUM[\"World Geodetic System 1984\", \n SPHEROID[\"WGS 84\", 6378137.0, 298.257223563, AUTHORITY[\"EPSG\",\"7030\"]], \n AUTHORITY[\"EPSG\",\"6326\"]], \n PRIMEM[\"Greenwich\", 0.0, AUTHORITY[\"EPSG\",\"8901\"]], \n UNIT[\"degree\", 0.017453292519943295], \n AXIS[\"Geodetic longitude\", EAST], \n AXIS[\"Geodetic latitude\", NORTH], \n AUTHORITY[\"EPSG\",\"4326\"]]",
"srs": "EPSG:4326",
"nativeBoundingBox": {
"minx": -74.0118315772888,
"maxx": -74.00153046439813,
"miny": 40.70754683896324,
"maxy": 40.719885123828675,
"crs": "EPSG:4326"
},
"latLonBoundingBox": {
"minx": -74.0118315772888,
"maxx": -74.00857344353275,
"miny": 40.70754683896324,
"maxy": 40.711945649065406,
"crs": "EPSG:4326"
},
"projectionPolicy": "REPROJECT_TO_DECLARED",
"enabled": true,
"metadata": {
"entry": [
{
"#key": "kml.regionateStrategy",
"$": "external-sorting"
},
{
"#key": "kml.regionateFeatureLimit",
"$": "15"
},
{
"#key": "cacheAgeMax",
"$": "3000"
},
{
"#key": "cachingEnabled",
"$": "true"
},
{
"#key": "kml.regionateAttribute",
"$": "NAME"
},
{
"#key": "indexingEnabled",
"$": "false"
},
{
"#key": "dirName",
"$": "DS_poi_poi"
}
]
},
"store": {
"#class": "dataStore",
"name": "tiger:nyc",
"href": "http://localhost:8080/geoserver/rest/workspaces/tiger/datastores/nyc.json"
},
"cqlFilter": "INCLUDE",
"maxFeatures": 100,
"numDecimals": 6,
"responseSRS": {
"string": [
4326
]
},
"overridingServiceSRS": true,
"skipNumberMatched": true,
"circularArcPresent": true,
"linearizationTolerance": 10,
"attributes": {
"attribute": [
{
"name": "the_geom",
"minOccurs": 0,
"maxOccurs": 1,
"nillable": true,
"binding": "com.vividsolutions.jts.geom.Point"
},
{},
{},
{}
]
}
}
So this is the example case, and I can't get any useful response from the server. I get code 500 with the body "name" (the first item in the JSON). Similarly, I get the same code with the body "FeatureTypeInfo" when trying with an XML body (the first tag).
I already tried the request on a new instance of GeoServer in Docker (with a changed port) and still had no success.
I checked that the datastore and workspace are available and that the layer "poi" doesn't exist yet.
Here are also some logs of the request (similar for the XML body):
2018-08-03 07:35:02,198 ERROR [geoserver.rest] -
com.thoughtworks.xstream.mapper.CannotResolveClassException: name at
com.thoughtworks.xstream.mapper.DefaultMapper.realClass(DefaultMapper.java:79)
at .....
Does anyone know the solution to this and has got it working? I am using GeoServer 2.13.1.
So I was still looking for the answer, and using this post (https://gis.stackexchange.com/questions/12970/create-a-layer-in-geoserver-using-rest) I got to the right content to POST a featureType and hence create a layer in GeoServer.
The REST API documentation is off here.
Using the above link I found out that, when using JSON, a wrapper object is missing. For the API to work, we need to structure the body like this:
{
"featureType": {
"name": "...",
"nativeName": "...",
...
}
}
So the body doesn't start with the "name" attribute; instead, everything is contained in a "featureType" object.
I didn't try this for XML, but I guess it would be similar.
Hope this helps someone out there struggling like I did.
Blaz is correct here: you need an outer featureType object and then an inner object with your config. So:
{
"featureType": {
"name": "layer",
"nativeName": "poi",
"your config": "stuff"
}
}
I find, though, that the POST request returns very little (if any) response body, and it's not obvious whether the layer creation worked. But you can call http://IP:8080/geoserver/rest/layers.json to check whether your new layer is there.
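If the creation worked, the response from that endpoint should contain an entry for the new layer, roughly in the shape sketched below. Depending on the GeoServer version the name may or may not be workspace-prefixed, and the href shown is illustrative:
{
  "layers": {
    "layer": [
      {
        "name": "tiger:poi",
        "href": "http://IP:8080/geoserver/rest/layers/tiger%3Apoi.json"
      }
    ]
  }
}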
It cost me a lot of time to create FeatureTypes using the REST API. Using JSON like this really works:
{
"featureType": {
"name": "layer",
"nativeName": "poi"
"otherProperties...":"values..."
}
And use the JSON below to create a workspace:
{
"workspace": {
"name": "test_workspace"
}
}
The REST API documentation is out of date now. That's disappointing. Does anyone know how to get the latest REST API documentation?

Get all vertices having a labelname

I am using IBM Graph on Bluemix and am new to this.
I created a graph named 'test' using the GUI provided by Bluemix and uploaded the sample data 'Music Festival' provided by IBM into that graph.
Now I am trying to query all the vertices having the label 'attendee' using the query below.
def gt = graph.traversal();
gt.V().hasLabel("attendee");
But I am getting the following error:
Error: Error encountered evaluating script def gt = graph.traversal();gt.V().hasLabel("attendee"); with reason com.thinkaurelius.titan.core.TitanException: Could not find a suitable index to answer graph query and graph scans are disabled: [(~label = attendee)]:VERTEX
Not sure what I am doing wrong.
Can somebody tell me where I am going wrong?
How can I get rid of this error and get the expected output?
Thanks
@Radhika, your Gremlin query is a valid Gremlin query. However, some vendors (such as IBM Graph and Titan) chose to only allow users to start their queries with a step that is indexed. This is to make sure your queries perform well. Calling hasLabel() by itself will give you the "Could not find a suitable index..." error, as you can't create indexes for labels. What you need to do is follow this step with a step that uses an indexed property, as in this query:
def gt = graph.traversal(); gt.V().hasLabel("band").has("genre","pop");
An index for genre has been created in the schema for the sample music festival data, as you can see below:
{
"propertyKeys": [
{ "name": "name", "dataType": "String", "cardinality": "SINGLE" },
{ "name": "gender", "dataType": "String", "cardinality": "SINGLE" },
{ "name": "age", "dataType": "Integer", "cardinality": "SINGLE" },
{ "name": "genre", "dataType": "String", "cardinality": "SINGLE" },
{ "name": "monthly_listeners", "dataType": "String", "cardinality": "SINGLE" },
{ "name":"date","dataType":"String","cardinality":"SINGLE" },
{ "name":"time","dataType":"String","cardinality":"SINGLE" }
],
"vertexLabels": [
{ "name": "attendee" },
{ "name": "band" },
{ "name": "venue" }
],
"edgeLabels": [
{ "name": "bought_ticket", "multiplicity": "MULTI" },
{ "name":"advertised_to","multiplicity":"MULTI" },
{ "name":"performing_at","multiplicity":"MULTI" }
],
"vertexIndexes": [
{ "name": "vByName", "propertyKeys": ["name"], "composite": true, "unique": false },
{ "name": "vByGender", "propertyKeys": ["gender"], "composite": true, "unique": false },
{ "name": "vByGenre", "propertyKeys": ["genre"], "composite": true, "unique": false}
],
"edgeIndexes" :[
{ "name": "eByBoughtTicket", "propertyKeys": ["time"], "composite": true, "unique": false }
]
That's why the above query works and you need to do the same.
If you don't have a schema, create one. You can model it after the one above or follow the API doc.
Create a vertex/label index for the properties that you'll start your traversals from. In this example, name, gender and genre for the vertex properties and time for the edge properties.
Call the schema endpoint to add your schema to your graph.
It's recommended to create your schema before adding any data to your graph so that you don't have to reindex later. That'll save you a lot of time.
Once you create your schema, you can't modify what you created already, but you can add new properties/indexes later on.
Look at the following code samples for Java and Nodejs for the exact code to use.
I hope that helps