What's the status of the External Master Replication beta in Google Cloud SQL?

I have a Compute Engine image running a MySQL database, confirmed to be reachable from the outside world.
When I use the curl script to create an 'interface' to this database, no errors are returned. However, in the console GUI there is a warning triangle and no read replica can be created.
Are there any known issues with this functionality, and is there any way I could get more detailed logging out of the response?
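For reference, the external-server flow goes through the Cloud SQL Admin API; a minimal sketch of the call I'd expect that curl script to be making (project name, instance name, and host IP are placeholders), plus one way to pull more detail out of the API afterwards:

```shell
# Create a "source representation" instance pointing at the external MySQL
# server (placeholders: my-project, external-mysql-source, 203.0.113.10).
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://sqladmin.googleapis.com/sql/v1beta4/projects/my-project/instances \
  -d '{
    "name": "external-mysql-source",
    "region": "us-central1",
    "databaseVersion": "MYSQL_5_7",
    "onPremisesConfiguration": {
      "hostPort": "203.0.113.10:3306"
    }
  }'

# The create call only returns an operation, so errors tend to surface
# later; listing recent operations for the instance can expose more detail:
gcloud sql operations list --instance=external-mysql-source --limit=5
```

Since the initial response can be clean while the operation itself fails, the operations log is often the only place the underlying error shows up.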

Related

Can't upgrade Azure PostgreSQL Flexible Server or add more vCores

I am trying to upgrade my PostgreSQL Flexible Server on Azure from D2s_v3 to D4s_v3, but the options are not available in the portal.
Scaling a PostgreSQL Flexible Server can require a server restart. Compute can only move between the available SKU sizes (which double at each step), and, importantly, existing server storage can never be decreased.
To reproduce this, I created a new Flexible Server with the D2s_v3 configuration.
I then scaled the compute up to D4s_v3.
Verification: once the restart request completes, the change is applied.
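When the portal hides the scale options, the same operation can sometimes be attempted from the Azure CLI instead (the resource group and server names below are placeholders):

```shell
# Scale the compute SKU from D2s_v3 to D4s_v3; this may trigger a restart.
az postgres flexible-server update \
  --resource-group my-rg \
  --name my-flex-server \
  --tier GeneralPurpose \
  --sku-name Standard_D4s_v3
```

Storage can be grown in the same command with --storage-size, but, as noted above, it can never be decreased.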

Why does my Tarantool Cartridge cluster sometimes retrieve data from the router instance?

I wonder why my Tarantool Cartridge cluster is not working as it should.
I have a Cartridge cluster running on Kubernetes. The Cartridge image is generated with the Cartridge CLI (cartridge pack), and no changes were made to the generated files.
Kubernetes cluster is deployed via helm with the following values:
https://gist.github.com/AlexanderBich/eebcf67786c36580b99373508f734f10
Issue:
When I make requests from the pure-PHP Tarantool client, for example a SELECT SQL request, it sometimes retrieves the data from the storage instances, but sometimes it unexpectedly responds with data from the router instance instead.
The same goes for INSERT: after I created the same schema on both the storage and the router instances and made 4 requests, 2 rows ended up in storage and 2 in the router.
That's weird; from reading the documentation I'm sure it's not the intended behaviour. I'm struggling to find the source of this behaviour and hope for your help.
SQL in Tarantool doesn't work in cluster mode, e.g. with tarantool-cartridge.
P.S. That was the response to my question from the Tarantool community in the Tarantool Telegram chat.
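That answer lines up with how Cartridge distributes data: reads and writes are meant to go through vshard's bucket routing, while a plain SQL statement simply executes on whichever instance the client happens to be connected to, router included. A rough, purely illustrative sketch of the routing step that SQL bypasses (the names here are invented, not Tarantool APIs):

```python
# Illustrative sketch of vshard-style bucket routing; names are invented.
# A sharded request maps a key to a bucket, then the bucket to a storage
# replica set. A plain SQL statement skips this step and runs wherever
# the connection lands, e.g. on the router.

BUCKET_COUNT = 3000  # vshard's bucket_count is fixed per cluster

def bucket_id(shard_key: str) -> int:
    # Stand-in for a stable hash such as vshard's strcrc32; buckets are 1-based.
    return hash(shard_key) % BUCKET_COUNT + 1

def route(shard_key: str, bucket_to_replicaset: dict) -> str:
    """Return the replica set that owns this key's bucket."""
    return bucket_to_replicaset[bucket_id(shard_key)]

# Toy mapping: odd buckets live on storage-1, even buckets on storage-2.
mapping = {b: ("storage-1" if b % 2 else "storage-2")
           for b in range(1, BUCKET_COUNT + 1)}

# Every key resolves to a storage replica set, never to the router.
print(route("customer:42", mapping))
```

In Cartridge, the practical fix is to send requests through the vshard router API (or a module built on it, such as crud) rather than issuing SQL directly to whichever instance the client connects to.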

MongoDB - not authorized in shared cluster despite atlasAdmin role

I have a problem with a shared MongoDB cluster: I try to get data via the Node.js MongoDB driver. A few days ago it worked just fine, but now every getMore command I send to the cluster is very, very slow. So I thought: maybe I just have to turn it off and on again.
So I tried to connect to the cluster with the mongo shell. Everything works fine; my user has the atlasAdmin role (which can be seen via db.getUser("admin")), but when I try to execute commands like db.shutdownServer() or show users, the server tells me that I'm not authorized, even though db.auth("admin", ...pw...) returns 1.
After some research, I found out I would have to shut down the server to have a chance of fixing this problem. But without permission, how should I do that? Is there any other way to perform this, like a button in the Atlas web app or something?
Atlas is a hosted service, so the privileges differ from those of a self-managed MongoDB server. From MongoDB Database User Privileges, this is the list of privileges of atlasAdmin:
readWriteAnyDatabase
readAnyDatabase
dbAdminAnyDatabase
clusterMonitor
cleanupOrphaned
enableSharding
flushRouterConfig
moveChunk
splitChunk
viewUser
The shutdown privilege is part of the hostManager role, which is not included in the list above.
Depending on your type of Atlas deployment, here is the list of restricted commands/privileges:
Unsupported Commands in M0/M2/M5 Clusters
Unsupported Commands in M10+ Clusters
If you need to "turn off and on" your deployment, you might be able to use the Test Failover button if your type of deployment supports it. That button steps down the primary node and elects a new primary, which in most cases is almost the equivalent of "turning it off and on again".
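One way to see what your connection is actually allowed to do (as opposed to which role names are attached to the user) is the standard connectionStatus command; the cluster hostname below is a placeholder:

```shell
# In mongosh: print the authenticated user's effective roles and privileges.
mongosh "mongodb+srv://cluster0.example.mongodb.net/admin" --username admin \
  --eval 'db.runCommand({ connectionStatus: 1, showPrivileges: true })'
```

If shutdown does not appear in the returned authenticatedUserPrivileges, commands like db.shutdownServer() will be rejected no matter what the role is called.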

AWS elastic search cluster becoming unresponsive

We have several AWS Elasticsearch domains which sometimes become unresponsive for no apparent reason. The ES endpoint as well as Kibana return Bad Gateway errors after a few minutes of trying to load the resources.
The node status message is the following (not that it's any help):
/_cluster/health: {"code":"ProxyRequestServiceException","message":"Unable to execute HTTP request: Read timed out (Service: null; Status Code: 0; Error Code: null; Request ID: null)"}
Error logs are activated for the cluster but do not show anything relevant for the time at which the cluster became inactive.
I would like to at least be able to restart the cluster but the status remains "processing" seemingly forever.
Unfortunately, if you are using the AWS Elasticsearch Service (as in not building it on your own EC2 instances), many... well... MOST... of the admin APIs and capabilities are restricted, so you cannot dig into it as much as you could if you had built it from the ground up.
I have found that AWS Support does a pretty good job in getting to the bottom of things when needed, so I would suggest you open a support ticket.
I wish this wasn't the case, but using their service is nice and easy (as in you don't have to build and maintain the infra yourself), but you lose a LOT of capabilities from an Admin or Troubleshooting perspective. :(

Why would running a container on GCE get stuck on "Metadata request unsuccessful: Forbidden (403)"?

I'm trying to run a container in a custom VM on Google Compute Engine. This is to perform a heavy ETL process, so I need a large machine, but only for a couple of hours a month. I have two versions of my container with small startup changes. Both versions were built and pushed to the same Google Container Registry by the same computer using the same Google login. The older one works fine, but the newer one fails by getting stuck in an endless stream of the following error:
E0927 09:10:13 7f5be3fff700 api_server.cc:184 Metadata request unsuccessful: Server responded with 'Forbidden' (403): Transport endpoint is not connected
Can anyone tell me exactly what's going on here? Can anyone explain why one of my images doesn't have this problem (well, it gives a few of these messages but gets past them) while the other does (thousands of these messages, taking over 24 hours before I killed it)?
If I SSH into a GCE instance, both versions of the container pull and run just fine. I suspect the INTEGRITY_RULE checking from the logs, but I know nothing about how that works.
MORE INFO: this is down to "Restart policy: Never". Even a simple centos:7 container that prints "hello world", deployed from the console, triggers this if the restart policy is Never. At least in the short term I can work around it in the entrypoint script, since the instance will be destroyed when the monitor realises that the process has finished.
I suggest you try creating a third container that's focused on the metadata-service functionality, to isolate the issue. It may be that there's a timing difference between the two containers that isn't being overcome.
Make sure you can curl the metadata service from the VM and that the request to the metadata service is using the VM's service account.
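Both checks can be done from an SSH session on the VM; note that the Metadata-Flavor header is mandatory, or the metadata server refuses the request:

```shell
# Confirm the metadata service is reachable from the VM and see which
# service account the instance is running as.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"

# Confirm a token can be issued for that service account; a 403 here
# points at the account's scopes or IAM permissions rather than networking.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
```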