AWS::WAFv2::LoggingConfiguration encounters invalid ARN when creating a stack with CloudFormation

I have a CloudFormation template that creates a WAFv2 web ACL along with CloudWatch logging. I encountered an issue when trying to set the LoggingConfiguration. The actual error I got looks something like this:
Resource handler returned message: "Error reason: The ARN isn't valid. A valid ARN begins with arn: and includes other information separated by colons or slashes., field: LOG_DESTINATION, parameter: arn:aws:logs:us-east-1:xxxxx:log-group:aws-waf-bar-foo:*
My LoggingConfiguration looks something like this:
"webAcllogging": {
"Type": "AWS::WAFv2::LoggingConfiguration",
"Properties": {
"ResourceArn": {
"Fn::GetAtt": [
"webAcl",
"Arn"
]
},
"LogDestinationConfigs": [
{
"Fn::Sub": "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:aws-waf-bar-foo:*"
}
],
"RedactedFields": [
{
"SingleHeader": {
"Name": "password"
}
}
]
}
},
I tried changing a few things and I still encounter this error. Does anyone know why?

It turns out that you have to use a special naming convention for WAF logs.
The name needs to be prefixed by aws-waf-logs-.
So the LogDestinationConfigs should be as follows:
"LogDestinationConfigs": [
{
"Fn::Sub": "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:aws-waf-logs-bar-foo:*"
}
],
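Note that the log group the ARN points to also has to exist with that same prefix (created elsewhere in the template or beforehand). A minimal sketch of that resource; the logical ID and retention period here are assumptions, not from the original template:
"wafLogGroup": {
  "Type": "AWS::Logs::LogGroup",
  "Properties": {
    "LogGroupName": "aws-waf-logs-bar-foo",
    "RetentionInDays": 30
  }
},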

Related

Cannot parametrize any value under placement.managedCluster.config

My goal is to create a Dataproc workflow template from Python code. I also want the ability to parameterize the placement.managedCluster.config.gceClusterConfig.subnetworkUri field during template instantiation.
I read the template from a JSON file like this:
{
  "id": "bigquery-extractor",
  "placement": {
    "managed_cluster": {
      "config": {
        "gce_cluster_config": {
          "subnetwork_uri": "some-subnet-name"
        },
        "software_config": {
          "image_version": "1.5"
        }
      },
      "cluster_name": "some-name"
    }
  },
  "jobs": [
    {
      "pyspark_job": {
        "args": [
          "job_argument"
        ],
        "main_python_file_uri": "gs:///path-to-file"
      },
      "step_id": "extract"
    }
  ],
  "parameters": [
    {
      "name": "CLUSTER_NAME",
      "fields": [
        "placement.managedCluster.clusterName"
      ]
    },
    {
      "name": "SUBNETWORK_URI",
      "fields": [
        "placement.managedCluster.config.gceClusterConfig.subnetworkUri"
      ]
    },
    {
      "name": "MAIN_PY_FILE",
      "fields": [
        "jobs['extract'].pysparkJob.mainPythonFileUri"
      ]
    },
    {
      "name": "JOB_ARGUMENT",
      "fields": [
        "jobs['extract'].pysparkJob.args[0]"
      ]
    }
  ]
}
The code snippet I use:
# Imports assumed for this snippet; region, project_id and path_to_file are defined elsewhere.
import json

from google.api_core.client_options import ClientOptions
from google.api_core.exceptions import AlreadyExists
from google.cloud import dataproc_v1 as dataproc

options = ClientOptions(api_endpoint="{}-dataproc.googleapis.com:443".format(region))
client = dataproc.WorkflowTemplateServiceClient(client_options=options)

# Parse the JSON template file into a dict (json.load rather than eval)
with open(path_to_file, "r") as template_file:
    template_dict = json.load(template_file)
print(template_dict)

template = dataproc.WorkflowTemplate(template_dict)
full_region_id = "projects/{project_id}/regions/{region}".format(project_id=project_id, region=region)

try:
    client.create_workflow_template(
        parent=full_region_id,
        template=template
    )
except AlreadyExists as err:
    print(err)
When I try to run this code I get the following error:
google.api_core.exceptions.InvalidArgument: 400 Invalid field path placement.managed_cluster.configuration.gce_cluster_config.subnetwork_uri: Field gce_cluster_config does not exist.
The behavior is the same if I try to parameterize placement.managedCluster.config.softwareConfig.imageVersion; then I get:
google.api_core.exceptions.InvalidArgument: 400 Invalid field path placement.managed_cluster.configuration.software_config.image_version: Field software_config does not exist.
But if I exclude any field under placement.managedCluster.config from the parameters map, the template is created successfully.
I didn't find any restriction on parameterizing these fields. Is there one? Or am I just doing something wrong?
This doc lists the parameterizable fields. It seems that, of managedCluster, only managedCluster.name is parameterizable:
Managed cluster name. Dataproc will use the user-supplied name as the name prefix, and append random characters to create a unique cluster name. The cluster is deleted at the end of the workflow.
I don't see managedCluster.config listed as parameterizable.
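As a sketch based on the template in the question, dropping the non-parameterizable SUBNETWORK_URI entry and keeping the remaining parameters should allow the template to be created:
"parameters": [
  {
    "name": "CLUSTER_NAME",
    "fields": [
      "placement.managedCluster.clusterName"
    ]
  },
  {
    "name": "MAIN_PY_FILE",
    "fields": [
      "jobs['extract'].pysparkJob.mainPythonFileUri"
    ]
  },
  {
    "name": "JOB_ARGUMENT",
    "fields": [
      "jobs['extract'].pysparkJob.args[0]"
    ]
  }
]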

Can not create new layer (featuretype) in GeoServer using REST API

So I just spent two working days trying to figure this out. We are automating a rendering process for maps. All the data lives in a SQL database, and my job is to write a wrapper so we can implement this in our in-house framework. I managed all but one of the needed requests.
That request is POST featuretype, since this is the way of creating a layer that can later be rendered.
I have all the requests saved in Postman for pre-testing on example data provided by GeoServer itself. I can't get a response with status code 201 and always get a 500 Internal Server Error. This status is described as a possible syntax error, but I actually just copied and pasted the example and used the data GeoServer provides.
This is the request: http://127.0.0.1:8080/geoserver/rest/workspaces/tiger/datastores/nyc/featuretypes
and its body:
{
  "name": "poi",
  "nativeName": "poi",
  "namespace": {
    "name": "tiger",
    "href": "http://localhost:8080/geoserver/rest/namespaces/tiger.json"
  },
  "title": "Manhattan (NY) points of interest",
  "abstract": "Points of interest in New York, New York (on Manhattan). One of the attributes contains the name of a file with a picture of the point of interest.",
  "keywords": {
    "string": [
      "poi",
      "Manhattan",
      "DS_poi",
      "points_of_interest",
      "sampleKeyword\\#language=ab\\;",
      "area of effect\\#language=bg\\;\\#vocabulary=technical\\;",
      "Привет\\#language=ru\\;\\#vocabulary=friendly\\;"
    ]
  },
  "metadataLinks": {
    "metadataLink": [
      {
        "type": "text/plain",
        "metadataType": "FGDC",
        "content": "www.google.com"
      }
    ]
  },
  "dataLinks": {
    "org.geoserver.catalog.impl.DataLinkInfoImpl": [
      {
        "type": "text/plain",
        "content": "http://www.google.com"
      }
    ]
  },
  "nativeCRS": "GEOGCS[\"WGS 84\", \n DATUM[\"World Geodetic System 1984\", \n SPHEROID[\"WGS 84\", 6378137.0, 298.257223563, AUTHORITY[\"EPSG\",\"7030\"]], \n AUTHORITY[\"EPSG\",\"6326\"]], \n PRIMEM[\"Greenwich\", 0.0, AUTHORITY[\"EPSG\",\"8901\"]], \n UNIT[\"degree\", 0.017453292519943295], \n AXIS[\"Geodetic longitude\", EAST], \n AXIS[\"Geodetic latitude\", NORTH], \n AUTHORITY[\"EPSG\",\"4326\"]]",
  "srs": "EPSG:4326",
  "nativeBoundingBox": {
    "minx": -74.0118315772888,
    "maxx": -74.00153046439813,
    "miny": 40.70754683896324,
    "maxy": 40.719885123828675,
    "crs": "EPSG:4326"
  },
  "latLonBoundingBox": {
    "minx": -74.0118315772888,
    "maxx": -74.00857344353275,
    "miny": 40.70754683896324,
    "maxy": 40.711945649065406,
    "crs": "EPSG:4326"
  },
  "projectionPolicy": "REPROJECT_TO_DECLARED",
  "enabled": true,
  "metadata": {
    "entry": [
      {
        "#key": "kml.regionateStrategy",
        "$": "external-sorting"
      },
      {
        "#key": "kml.regionateFeatureLimit",
        "$": "15"
      },
      {
        "#key": "cacheAgeMax",
        "$": "3000"
      },
      {
        "#key": "cachingEnabled",
        "$": "true"
      },
      {
        "#key": "kml.regionateAttribute",
        "$": "NAME"
      },
      {
        "#key": "indexingEnabled",
        "$": "false"
      },
      {
        "#key": "dirName",
        "$": "DS_poi_poi"
      }
    ]
  },
  "store": {
    "#class": "dataStore",
    "name": "tiger:nyc",
    "href": "http://localhost:8080/geoserver/rest/workspaces/tiger/datastores/nyc.json"
  },
  "cqlFilter": "INCLUDE",
  "maxFeatures": 100,
  "numDecimals": 6,
  "responseSRS": {
    "string": [
      4326
    ]
  },
  "overridingServiceSRS": true,
  "skipNumberMatched": true,
  "circularArcPresent": true,
  "linearizationTolerance": 10,
  "attributes": {
    "attribute": [
      {
        "name": "the_geom",
        "minOccurs": 0,
        "maxOccurs": 1,
        "nillable": true,
        "binding": "com.vividsolutions.jts.geom.Point"
      },
      {},
      {},
      {}
    ]
  }
}
So it is the example case, and I can't get any useful response from the server. I get code 500 with the body "name" (the first item in the JSON). Similarly, I get the same code with the body "FeatureTypeInfo" when trying an XML body (the first tag).
I already tried the request against a new instance of GeoServer in Docker (changed the port) and still no success.
I checked that the datastore and workspace are available and that the layer "poi" doesn't exist yet.
Here are also some logs of the request (similar for the XML body):
2018-08-03 07:35:02,198 ERROR [geoserver.rest] -
com.thoughtworks.xstream.mapper.CannotResolveClassException: name at
com.thoughtworks.xstream.mapper.DefaultMapper.realClass(DefaultMapper.java:79)
at .....
Does anyone know the solution to this and has gotten it working? I am using GeoServer 2.13.1.
So I was still looking for the answer, and using this post (https://gis.stackexchange.com/questions/12970/create-a-layer-in-geoserver-using-rest) I got to the right content to POST a featureType and hence create a layer in GeoServer.
The REST API documentation is off here.
Using the above link I found out that when using JSON there is a missing wrapper object. For the API to work we need to send:
{
  "featureType": {
    "name": "...",
    "nativeName": "...",
    ...
  }
}
So it doesn't start with the "name" attribute; everything is contained inside "featureType" instead.
I didn't try this for XML, but I guess it would be similar.
Hope this helps someone out there struggling like I did.
Blaz is correct here: you need an outer "featureType" object and then an inner object with your config. So:
{
  "featureType": {
    "name": "layer",
    "nativeName": "poi",
    "your config": "stuff"
  }
}
I find, though, that using a POST request I get very little if any response, and it's not obvious whether the layer creation worked. But you can call http://IP:8080/geoserver/rest/layers.json to check if your new layer is there.
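For anyone testing outside Postman, a rough curl sketch, assuming the default admin:geoserver credentials and the wrapped payload saved as featuretype.json:
curl -u admin:geoserver -X POST -H "Content-Type: application/json" -d @featuretype.json http://localhost:8080/geoserver/rest/workspaces/tiger/datastores/nyc/featuretypes
curl -u admin:geoserver http://localhost:8080/geoserver/rest/layers.json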
It cost me a lot of time to create FeatureTypes using the REST API. Using JSON like this really works:
{
  "featureType": {
    "name": "layer",
    "nativeName": "poi",
    "otherProperties...": "values..."
  }
}
And use the JSON below to create a workspace:
{
  "workspace": {
    "name": "test_workspace"
  }
}
The REST API documentation is out of date now. That's disappointing. Does anyone know how to get the latest REST API documentation?

Kubernetes CalculateNodeLabelPriority does not work

I need to prioritize pod creation on nodes according to a given node label. I used CalculateNodeLabelPriority to enforce the rules I need. However, when I start the kube-scheduler I get the following error:
F0217 10:52:18.020751 3198 plugins.go:198] Invalid configuration: Priority type not found for CalculateNodeLabelPriority
I looked at master and I can see that the CalculateNodeLabelPriority priority is not registered in defaultPriorities:
https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go
Why isn't it registered although it is mentioned in priorities.go? Is there a way I can register it?
I managed to fix the issue by adding an argument variable. The code checks for the argument object in order to accept the priority object:
https://github.com/kubernetes/kubernetes/blob/release-1.1/plugin/pkg/scheduler/factory/plugins.go#L98
{
  "predicates": [
    {
      "name": "HostName"
    },
    {
      "name": "MatchNodeSelector"
    },
    {
      "name": "PodFitsResources"
    }
  ],
  "priorities": [
    {
      "name": "NewNodeLabelPriority",
      "weight": 1,
      "argument": {
        "labelPreference": {
          "label": "pdc",
          "presence": true
        }
      }
    }
  ]
}
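This policy file then gets passed to the scheduler; on those older releases that was done with the --policy-config-file flag (the file path below is just an example):
kube-scheduler --policy-config-file=/etc/kubernetes/scheduler-policy.json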

RIPE: How to lookup IP Address using REST API

As per the RIPE REST API documentation, one needs to specify the requests in the following format:
http://rest.db.ripe.net/{source}/{objecttype}/{key}
So I am assuming that looking up an IP address will be like this:
http://rest.db.ripe.net/ripe/inetnum/193.0.6.142.json
However, the response I get is:
{
  "link": {
    "type": "locator",
    "href": "http://rest.db.ripe.net/ripe/inetnum/193.0.6.142"
  },
  "errormessages": {
    "errormessage": [
      {
        "severity": "Error",
        "text": "ERROR:101: no entries found\n\nNo entries found in source %s.\n",
        "args": [
          {
            "value": "RIPE"
          }
        ]
      }
    ]
  },
  "terms-and-conditions": {
    "type": "locator",
    "href": "http://www.ripe.net/db/support/db-terms-conditions.pdf"
  }
}
What am I doing wrong?
You are using the wrong URL, the correct URL for your example query would be:
http://rest.db.ripe.net/search.json?query-string=193.0.0.0/21&flags=no-filtering
Or this for XML:
http://rest.db.ripe.net/search.xml?query-string=193.0.0.0/21&flags=no-filtering
Looks like https://rest.db.ripe.net/search.json?query-string=193.0.6.142 is the correct link to use. This seems to return the same data as I see on ripe.net.
You didn't write the {key} part right. Inetnum objects in the RIPE database have keys like "193.0.0.0 - 193.0.7.255". You must make a request like this:
https://rest.db.ripe.net/ripe/inetnum/91.123.16.0 - 91.123.31.255
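Note that the spaces in the key need to be URL-encoded when the request is actually sent, e.g. https://rest.db.ripe.net/ripe/inetnum/91.123.16.0%20-%2091.123.31.255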

How to set user name and group name in IAM using CloudFormation?

I created a CloudFormation template and I wanted to create an IAM user. To do that I used this JSON string:
"CFNUser" : {
"Type" : "AWS::IAM::User",
"Properties" : {
"LoginProfile": {
"Password": { "Ref" : "AdminPassword" }
}
}
},
Then for group I used this:
"CFNUserGroup" : {
"Type" : "AWS::IAM::Group"
},
After creating the stack, I got the following:
user name - IAMUsers-CFNUser-E1BT342YK7G6
group name - IAMUsers-CFNUserGroup-1UBUBRYALTIMI
So my question is: how can I set the user name here? The same goes for the group name.
After talking with AWS support: at this time of writing, it is not possible to specify your own user name and group name in IAM using a CloudFormation template :-(
Maybe there's a reason why they don't allow users to do this... Anyway, it's good that I have an answer to this question, and I will be glad if someone finds this useful.
Amazon has added support for this as of 20 July 2016:
https://aws.amazon.com/about-aws/whats-new/2016/07/aws-cloudformation-adds-support-for-aws-iot-and-additional-updates/
{
  "Type": "AWS::IAM::User",
  "Properties": {
    "Groups": [ String, ... ],
    "LoginProfile": LoginProfile Type,
    "ManagedPolicyArns": [ String, ... ],
    "Path": String,
    "Policies": [ Policies, ... ],
    "UserName": String
  }
}
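So a user with a fixed name can now be declared like this (a minimal sketch; the logical ID and the UserName value are just example names, and the LoginProfile is copied from the question):
"CFNUser": {
  "Type": "AWS::IAM::User",
  "Properties": {
    "UserName": "My_CFN_User",
    "LoginProfile": {
      "Password": { "Ref": "AdminPassword" }
    }
  }
},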
For groups, it's a GroupName property:
"CFNUserGroup" : {
"Type" : "AWS::IAM::Group",
"Properties": {
"GroupName": "My_CFN_User_Group"
}
}