Duplicate hosted zones for hosted zone name with CloudFormation - aws-cloudformation

I am struggling with the following problem.
I am creating stacks with CloudFormation (the stacks are identical); each stack has a VPC and a private hosted zone with the same name (let's say me.company.com).
The first stack is created fine, but the second stack gets the following error:
"Duplicate hosted zones for hosted zone name me.company.com.: ZAIN4N303O6JL, Z36EHAOJPLFUVJ"
When I try to do the same via the AWS console I don't get that problem. What exactly is the problem here, and how do I solve it?
Thanks.

You never posted the template, but I'm guessing your stack wasn't creating a Hosted Zone, but was actually creating a Record Set which pointed to a Hosted Zone. In any case, that's what I was doing when I got the exact same message. If I'm correct, then the message is fairly clear: whether you can see them or not, in your second case there are two Hosted Zones with the same name, "me.company.com.". If you paste a Zone Id into the search bar, you should be able to see the individual Hosted Zones. You will either need to delete one of the two hosted zones before creating the Record Set, or alter your template to use the HostedZoneId property (which is guaranteed unique) instead of the HostedZoneName property when you create the Record Set.
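If you go the HostedZoneId route, the change might look something like this minimal sketch (the parameter and record names are illustrative, not from the original template):

Parameters:
  PrivateZoneId:
    # Resolves to exactly one zone, so duplicate zone names can't confuse the lookup
    Type: AWS::Route53::HostedZone::Id

Resources:
  AppRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateZoneId   # instead of HostedZoneName: me.company.com.
      Name: app.me.company.com.
      Type: A
      TTL: '300'
      ResourceRecords:
        - 10.0.0.10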

Related

I can't find CloudWatch metric in Grafana UI query editor/builder

I'm trying to create a Grafana dashboard that reflects my AWS RDS cluster metrics.
For simplicity I chose CloudWatch as a data source. It works well for showing the 'direct' metrics from the RDS cluster.
The problem is that we've switched to RDS Proxy due to the high number of connections we are required to support.
Now I'm adjusting my dashboard to reflect a few metrics that are missing; the most important is the number of actual connections, which the AWS CloudWatch console presents with this query:
SELECT AVG(DatabaseConnections)
FROM SCHEMA("AWS/RDS", ProxyName,Target,TargetGroup)
WHERE Target = 'db:my-db-1'
AND ProxyName = 'my-db-rds-proxy'
AND TargetGroup = 'default'
The problem is that I can't find it anywhere in the CloudWatch Grafana query editor.
The only metric with "connections" in the name is the standard DatabaseConnections, which represents the 'direct' connections to the RDS cluster and not the connections to the RDS Proxy.
Any ideas?
That UI editor is generated from a hardcoded list of metrics, which may not contain all metrics and dimensions (especially ones added recently), so the UI doesn't offer them in the select box.
But that is not a problem, because that select box is not a standard select box. It is an input where you can write your own metric and dimension names. Just click there, type what you need, and hit Enter to add it (the same applies to dimensions).
Pro tip: don't use the UI query builder (that's for beginners); switch to Code and write your queries directly (the UI builder builds that query under the hood anyway).
It would be nice if you created a Grafana PR adding the metrics and dimensions that are missing from the UI builder to metrics.go.
So, for whoever ends up here: you should use ClientConnections with ProxyName as the dimension (which I didn't set initially).
I was using an old Grafana version (7.3.5) which didn't have it built in.
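For reference, the working query looks roughly like this (a sketch, assuming ClientConnections is published under the ProxyName dimension and reusing the proxy name from the question):
SELECT AVG(ClientConnections)
FROM SCHEMA("AWS/RDS", ProxyName)
WHERE ProxyName = 'my-db-rds-proxy'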

How are dead nodes handled in AWS OpenSearch?

Trying to understand the right approach to connect to AWS OpenSearch (single cluster, multiple data nodes).
To my understanding, as long as the data nodes are behind a load balancer (according to this and other AWS docs: https://aws.amazon.com/blogs/database/set-access-control-for-amazon-elasticsearch-service/), we cannot use:
var pool = new StaticConnectionPool(nodes);
and we probably should not use CloudConnectionPool, as it was originally dedicated to Elastic Cloud and was left in the OpenSearch client by mistake?
Hence we use SingleNodeConnectionPool and it works, but I've noticed several exceptions indicating that the node had DeadUntil set to a date one hour in the future, so I was wondering whether that is expected behavior, since from the client's perspective that is the only node it knows about.
What is the correct way to connect to an AWS OpenSearch cluster that has multiple nodes, and should I be concerned about the DeadUntil property?
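For reference, the single-node setup described above looks roughly like this with the opensearch-net client (the domain endpoint and index name are placeholders):
// The client only ever sees the domain endpoint, which AWS fronts with
// its own load balancer; the endpoint below is a placeholder.
var endpoint = new Uri("https://my-domain.eu-west-1.es.amazonaws.com");
var pool = new SingleNodeConnectionPool(endpoint);
var settings = new ConnectionSettings(pool).DefaultIndex("my-index");
var client = new OpenSearchClient(settings);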

Weird "data has been changed" issue

I'm experiencing a very weird issue with "data has been changed" errors.
I use MS Access as a frontend and PostgreSQL as the backend. The backend used to be in MS Access and there were no issues; then it was moved to SQL Server and there were no issues there either. The problem started when I moved to PostgreSQL.
I have a table called Orders and a table called Job; each order has multiple jobs. I have two forms: a parent form for the Order and a subform for the Jobs (a continuous form). I put the subform in a separate tab; the first tab contains general order information and the second tab has the Job information. Job is connected to Orders using a foreign key called OrderID; the Id of Orders equals OrderID in Job.
Here is my problem:
I enter some information in the first tab (customer name, dates, etc.), then move to the second tab, do nothing there, go back to the first one, and change a date. I get the "The data has been changed" error.
I'm confused as to why this is happening. Now, why do I call this weird?
First, if I put the subform on the first tab, I can change all fields of Orders just fine. It's only if I put it on the second tab, add some info, change tabs, then go back and change an already existing value that I get the error.
Second, if I make the subform on the second tab unbound (so no Id-OrderID connection), I get the SAME error.
Third, the "usual" id for the "The data has been changed" error is Runtime Error 440, but what I get is Runtime Error "-2147352567 (80020009)". Searching online for this error didn't help, because it can mean a lot of different things, including "The value you entered isn't valid for this field", like here:
Access Run time error - '-2147352567 (80020009)': subform
or many different results for code 80020009 but none for "the data has been changed"
MS Access 2016, PostgreSQL 12.4.1
I'm guessing you are using ODBC to connect Access to PostgreSQL. If so, do you have timestamp fields in the data you are working with? I have seen the above because a Postgres timestamp can have higher precision than Access supports. This means that when you UPDATE, Access uses a truncated version of the timestamp, can't find the record, and you get the error. For this and other possible causes see:
https://odbc.postgresql.org/faq.html#6.4 (Microsoft Applications)
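If the timestamp precision mismatch turns out to be the cause, one possible mitigation (the table and column names here are made up) is to drop the fractional seconds on the PostgreSQL side, so the truncated value Access sends back in its UPDATE matches the stored one:
-- Hypothetical table/column: store whole seconds only, so the value
-- Access round-trips still matches the stored row.
ALTER TABLE orders ALTER COLUMN last_modified TYPE timestamp(0);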

Google Cloud REST API - How can I return Compute Engine images newer than a specified creationTimestamp?

I'm using Google's Cloud API to return only disk images (compute.instances.list) created after a certain date.
I'm using the following for the filter parameter: creationTimestamp > 2019-08-02 but it's not working. I'm getting Invalid value for field 'filter': 'creationTimestamp \u003e 2019-08-02'. Invalid list filter expression.
Any ideas, or is it not possible? I can get it to work using a partial date and a wildcard (creationTimeStamp = 2019-08-0*), but that's not the same as everything after a given date.
This is a known issue with Google Cloud Platform, and you can follow its progress here.
As an alternative, you could use a gcloud command. You could first format the list as, for instance, a table, and then use one of the columns (the creation timestamp) to filter.
The following command
gcloud compute instances list --format="table(name,creationTimestamp)" --filter="CREATION_TIMESTAMP > 2019-08-23"
will give you a list of Compute Engine instances created after 2019-08-23; in this case, you will obtain only the name and creation date of each GCE instance.
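Since the title asks about images rather than instances, the same pattern should work for them as well, e.g.:
gcloud compute images list --format="table(name,creationTimestamp)" --filter="CREATION_TIMESTAMP > 2019-08-23"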
This blog post is an interesting and educational read on how to use filters, formats, tables, and more with gcloud commands.

Is it possible to copyObject from one Cloud Object Storage instance to another? The buckets are in different regions

I would like to use the Node SDK to implement a backup and restore mechanism between two instances of Cloud Object Storage. I have added a service ID to the instances and added permissions for the service ID to access the buckets present in the instance I want to write to. The buckets will be in different regions. I have tried a variety of endpoints, both legacy and non-legacy, private and public, to achieve this, but I usually get Access Denied.
Is what I am trying to do possible with the SDK? If so, can someone point me in the right direction?
var config = {
    "apiKeyId": "xxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxx",
    "endpoint": "s3.eu-gb.objectstorage.softlayer.net",
    "iam_apikey_description": "Auto generated apikey during resource-key operation for Instance - crn:v1:bluemix:public:cloud-object-storage:global:a/xxxxxxxxxxx:xxxxxxxxxxx::",
    "iam_apikey_name": "auto-generated-apikey-xxxxxxxxxxxxxxxxxxxxxx",
    "iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
    "iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/0xxxxxxxxxxxxxxxxxxxx::serviceid:ServiceIdxxxxxxxxxxxxxxxxxxxxxx",
    "serviceInstanceId": "crn:v1:bluemix:public:cloud-object-storage:global:a/xxxxxxxxxxxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxx::",
    "ibmAuthEndpoint": "iam.cloud.ibm.com/oidc/token"
}
This should work as long as you are able to properly grant the requesting user access to read the source of the put-copy, and as long as you are not using Key Protect based keys.
So the breakdown here is a bit confusing due to some unintuitive terminology.
A service instance is a collection of buckets. The primary reason for having multiple instances of COS is to have more granularity in your billing, as you'll get a separate line item for each instance. The term is a bit misleading, however, because COS is a true multi-tenant system - you aren't actually provisioning an instance of COS, you're provisioning a sort of sub-account within the existing system.
A bucket is used to segment your data into different storage locations or storage classes. Other behavior, like CORS, archiving, or retention, acts on the bucket level as well. You don't want to segment something that you expect to scale (like customer data) across separate buckets, as there's a limit of ~1k buckets in an instance. IBM Cloud IAM treats buckets as 'resources', and they are subject to IAM policies.
Instead, data that doesn't need to be segregated by location or class, and that you expect to be subject to the same CORS, lifecycle, retention, or IAM policies, can be separated by prefix. This means a bunch of similar objects share a path: for example, foo/bar and foo/bas have the same prefix, foo/. This helps with listing and organization but doesn't provide granular access control or any other sort of policy-esque functionality.
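With the S3-compatible API, a prefix is just a parameter on the list call. For example, given an ibm-cos-sdk client built from a config like the one in the question (bucket and prefix names are illustrative):
var IBM = require('ibm-cos-sdk');
var cos = new IBM.S3(config);  // 'config' as in the question
// List everything under the foo/ prefix in a single bucket.
cos.listObjectsV2({ Bucket: 'my-bucket', Prefix: 'foo/' })
    .promise()
    .then(function (res) { res.Contents.forEach(function (obj) { console.log(obj.Key); }); });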
Now, to your question: the answer is both yes and no. If the buckets are in the same instance, then no problem. Bucket names are unique, so as long as there isn't any secondary managed encryption (e.g. Key Protect), there's no problem copying across buckets, even if they span regions. Keep in mind, however, that large objects will take time to copy, and COS's strong consistency means the operation may not return a response until it's completed. Copying across instances is not currently supported.
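For the supported same-instance case, a copy might look roughly like this with the Node SDK (bucket and key names are illustrative; the request goes to the target bucket's endpoint, and CopySource names the source bucket and key):
var IBM = require('ibm-cos-sdk');
// 'config' as in the question, but with 'endpoint' pointing at the target bucket's region.
var cos = new IBM.S3(config);
cos.copyObject({
    Bucket: 'target-bucket',               // destination bucket
    Key: 'backups/my-object',              // destination key
    CopySource: 'source-bucket/my-object'  // source bucket/key
}).promise()
    .then(function () { console.log('copied'); })
    .catch(function (err) { console.error(err); });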