Which region is a Google Cloud Shell in?

Given a google cloudshell, is there a way of finding out which region it is in?
Many thanks, Max
P.S. I know that I can poke around to find the IP address and geolocate it; for example, curl freegeoip.net/xml/$(curl ifconfig.co) claims that the machine is in Vaduz, Liechtenstein. However, I would somewhat expect there to be something like an address I can curl to get the cloud config, and that it would contain the region and availability zone.

Since Cloud Shell runs on a GCP VM, you can see your assigned zone (and from it the region) by running
curl -H "Metadata-Flavor: Google" metadata/computeMetadata/v1/instance/zone
inside the Cloud Shell session. This is documented in https://cloud.google.com/compute/docs/storing-retrieving-metadata.
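The endpoint returns the full zone path (projects/PROJECT_NUMBER/zones/ZONE), so a little shell is enough to cut it down to the region. A minimal sketch, with the values in the comments being examples only:
ZONE=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/zone)
echo "${ZONE##*/}"                        # zone, e.g. us-central1-f
echo "${ZONE##*/}" | sed 's/-[a-z]$//'    # region, e.g. us-central1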
In general, it's worth keeping in mind that Cloud Shell is globally distributed across multiple GCP regions. When a user connects for the first time, the system assigns them to the geographically closest region that can accommodate new users. Users cannot manually choose their region, but the system does its best to pick the closest region Cloud Shell operates in. If the initial pick isn't the closest region, or if the user later connects from a location closer to a different region, Cloud Shell migrates the user to the closer region when the session ends.

Related

Multi Region Postgres Latency Issue Azure

The architecture we are currently using is as below:
Private web app services hosted in the US region and the India region.
Both apps are behind their respective App Gateway, and these gateways sit behind Front Door, which helps us serve each request from the nearest App Gateway. However, both apps use the same Postgres instance, which is in the US region.
Our issue is that when we hit the API from the US, the response time is less than 2 seconds, whereas when we hit it from the India region it takes 70 seconds.
How can we reduce the latency?
The problem is that the APIs perform write operations, which is why we cannot simply use a read replica.
There are a few things you can do:
1- Add a cache layer in both regions, and rather than querying the DB directly, check whether the data is available in the cache first; if it's not, get it from the DB and add it to the cache layer (see the sketch below).
2- Add a secondary, read-only database in the India region.
PS: You may have stale data with both approaches, so you should synchronize according to your requirements.
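For option 1, a minimal cache-aside sketch in shell, assuming a Redis cache deployed in each region and psql access to the primary (the key name, DSN variable and query are made up for illustration; writes still have to go to the US primary):
KEY="user:42"
VALUE=$(redis-cli GET "$KEY")
if [ -z "$VALUE" ]; then
  # cache miss: read once from the remote primary, then cache locally for 5 minutes
  VALUE=$(psql "$PRIMARY_DSN" -tAc "SELECT payload FROM users WHERE id = 42")
  redis-cli SET "$KEY" "$VALUE" EX 300
fi
echo "$VALUE"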

AWS landing zone home region and resource restrictions

My current understanding is that if I were to set up a Multi-Account Landing Zone (MALZ) in one region, say Ireland, I would still be able to have accounts that contain resources in other regions (US, Frankfurt, et al.), assuming the guardrails allow it.
Is my understanding correct? I am a bit confused when I read this:
Single AWS Region. AMS multi-account landing zone is restricted to a single AWS Region. To span multiple AWS Regions, use multiple multi-account landing zone.
https://docs.aws.amazon.com/managedservices/latest/userguide/single-or-multi-malz.html
AWS Managed Services (AMS) is a bit of a white-glove offering, so I'm not familiar with how standardised their offering and guardrails are. There are a few different parts that come into play:
regions that host your landing zone shared infrastructure, e.g. logging account, control tower, AWS SSO etc.
regions that host shared infrastructure that you deploy into every account managed under the landing zone, e.g. a default VPC (peered to a TGW)
regions that are allowed to be used in managed accounts, e.g. because an SCP on the OU forbids everything else
From my understanding it seems that one AMS multi-account landing zone always operates in a single region for all three of those.
That may be a fine restriction for starting out, but my experience with large landing zones (> 500 accounts) is that you keep 1. and 2. locked to a single region while restricting 3. only for governance/compliance reasons (e.g. EU only). That gives teams the freedom to use AWS regions in the way that makes the most sense for their applications: Lambda@Edge functions, regional S3 buckets, and so on.
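Outside of AMS, the usual way to express restriction 3. is an SCP on the OU. A minimal sketch (the region list, policy name and the exempted global services are placeholders to adapt):
cat > deny-non-eu-regions.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideEU",
      "Effect": "Deny",
      "NotAction": ["iam:*", "organizations:*", "route53:*", "support:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]
        }
      }
    }
  ]
}
EOF
aws organizations create-policy \
  --name DenyNonEURegions \
  --type SERVICE_CONTROL_POLICY \
  --description "Keep managed accounts in EU regions" \
  --content file://deny-non-eu-regions.json
The policy still has to be attached to the target OU with aws organizations attach-policy.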
Of course, applications that do need on-premises connectivity have a strong gravity towards the region hosting the transit gateway. Depending on what your on-prem looks like, larger orgs can later add multiple landing zones or, preferably, use a modular landing-zone approach with "TGW peerings as a service".

How do I find out which files were downloaded outside my continent (and by whom)?

I have been monitoring Cloud Storage billing daily and saw two unexpected, large spikes in "Download Worldwide Destinations (excluding Asia & Australia)" this month. The cost for this SKU is typically around US$2-4 daily; however, these two daily spikes have been $89 and $15.
I enabled GCS bucket logging soon after the $89 spike, hoping to deduce the cause the next time it happened, but when the $15 spike occurred yesterday, I was unable to pinpoint which service or which downloaded files caused it.
There is a log field named Location, but it appears to refer to the region where the bucket is located, not the location of the downloader (which is what would contribute to the "Worldwide Destinations" egress).
As far as I know, my services are all in the southamerica-east1 region, but it's possible that there is either a legacy service or a misconfigured one that has been responsible for these spikes.
The bucket that did show up outside my region is in the U.S., but I concluded that it is not responsible for the spikes because the files there are under 30 kB and have only been downloaded 8 times according to the logs.
Is there any way to filter the logs so that they tell me as much information as possible to help me track down what is driving up the "Download Worldwide Destinations" cost? Specifically:
which files were downloaded
if it was one of my Google Cloud services, which one it was
Enable usage logs and export the log data to a new bucket.
Google Cloud Usage logs & storage logs
The logs will contain the client IP address; you will need to use a geolocation service to map IP addresses to a city/country.
Note:
Cloud Audit Logs do not track access to public objects.
Google Cloud Audit Logs restrictions
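A minimal sketch of switching on the usage logs mentioned above with gsutil, assuming a dedicated log bucket (the bucket names and log prefix are made up; the cloud-storage-analytics grant is what lets GCS write the logs):
gsutil mb -l southamerica-east1 gs://my-usage-logs
gsutil iam ch group:cloud-storage-analytics@google.com:objectCreator gs://my-usage-logs
gsutil logging set on -b gs://my-usage-logs -o usage_log_ gs://my-production-bucket
gsutil logging get gs://my-production-bucket
# once logs accumulate, pull them down; each CSV line includes fields such as
# c_ip (client IP), cs_object and sc_bytes, which is what you need to attribute the egress
gsutil cp gs://my-usage-logs/usage_log_* ./logs/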

Import a custom Linux image for POWER-IAAS part of the IBM Cloud?

I am trying to import a cloud-enabled Debian Linux image for the Power architecture to run on the IBM public cloud, which supports this architecture.
I think I am following the instructions, but the behavior I am seeing is that at image-import time, after I fill in all the relevant information and hit the "import" button, the GUI just exits silently, with no apparent effect and no reported error.
I am reasonably experienced doing simple iaas stuff on AWS, but am new to the IBM cloud, and have not deployed a custom image on any cloud provider. I'm aware of "cloud-init", and have a reasonable general knowledge of what problem it solves (mapping cloud-provider metadata to config entries in the resulting VM at start-time), but not a great deal about how it actually works.
What I have done is:
Got an IBM cloud account, and upgraded out of the free tier, for access to Power.
Activated the Power Systems Virtual Server service.
Activated the Cloud Object Storage service.
Created a bucket in the COS.
Created an HMAC-enabled service credential for this bucket.
Uploaded my image, in .tar.gz format, to the bucket (via the CLI, it's too big to upload by GUI).
The image is from here -- that page is a bit vague on which cloud providers it may be expected to work with, but AFAIK the IBM cloud is the only public cloud supporting Power?
Then, from the Power Systems Virtual Server service page, I clicked the "Boot Images" item on the left to show the empty list, then "Import Image" at the top of the list, and filled in the form. I have answers for all of the entries: I can make up a new name, and I know the region of my COS, the image file name (the "key", in object-storage parlance), the bucket name, and the access and secret keys, which are available from the credential description in the COS panel.
Then the "import" button lights up, and I click it, and the import dialog disappears, no error is reported, and no image is imported.
There are various things that might be wrong that I'm not sure how to investigate.
It's possible the credential is not connected to the bucket in the right way, I didn't really understand the documentation about that, but in the GUI it looks like it's in the right scope and has the right data in it.
It's also possible that only certain types of images are allowed, and my image is failing some kind of validation check, but in that case I would expect an error message?
I have found the image-importing instructions for the non-Power-IAAS, but it seems like it's out of scope. I have also found some docs on how to prepare a custom image, but they also seem to be non-Power-IAAS.
What's the right way to do this?
Edit to add: Also tried doing this via the CLI ("ibmcloud pi image-import"), where it gets a time-out, apparently on the endpoint that's supposed to receive the image. Also, the command-line tool has an --os-type flag that apparently only takes [aix | sles | redhat | ibmi] -- my first attempt used raw, which is an error.
This is perhaps additional evidence that what I want to do is actually impossible?
PowerVS supports only .ova images. These are not the same as the ones supported by VMware, for instance.
You can get them from here: https://public.dhe.ibm.com/software/server/powervs/images/
Or you can use the images available in the regional pool of images:
ibmcloud pi image-list-catalog
Once you have your first VM up and running you can use https://github.com/ppc64le-cloud/pvsadm to create a new .ova. Today the tool only supports RHEL, CentOS and CoreOS.
If you want to easily play with PowerVS you can also use https://github.com/rpsene/powervs-actions.
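For completeness, the catalog route above assumes the CLI is pointed at your Power Systems Virtual Server workspace first; a small sketch (the CRN is a placeholder, and pvsadm flag names change between releases, so check its --help):
ibmcloud pi service-list
ibmcloud pi service-target <CRN of your Power Systems Virtual Server instance>
ibmcloud pi image-list-catalog
# once a VM booted from a stock image is running, "pvsadm image qcow2ova" can
# convert a qcow2 into a PowerVS-compatible .ova for a later custom import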

Google Compute engine and no files in bucket

I have a client, and whoever designed their site put it on Compute Engine. I am totally lost and have no clue about this. I do see a bucket, but there is only a footer.php in it. The site is a WordPress multisite, and I cannot find where the files are stored or how to access phpMyAdmin to see the database.
I ask because the site is having many issues, starting with an expired SSL certificate and an out-of-date PHP version, and now I cannot log in or see the site because it gives a 500 error or the white page of death.
I tried to find what caused the error, but found nothing.
Site is http://nextstudy.org
Can anyone help or direct me on what I can do to get to the files and maybe get it off of compute engine?
Appreciate you reading this............
Diana
GCE does not serve files from a bucket; it runs VM instances from disk images.
Unless you have been assigned an admin role in Cloud IAM, there's probably not much you can do. And even with an admin role granted, it's still rather risky to make changes without knowing the setup: if it's only a single instance, Cloud Shell might help, but if it's an instance group, the deployment may work quite differently, up to the point where the servers are spun up from nothing but a startup script, which makes editing individual instances fairly meaningless.
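If you do get an admin role, a rough sketch of how to reach the files from Cloud Shell (the instance name, zone and paths are guesses; WordPress installs vary):
gcloud compute instances list
gcloud compute ssh INSTANCE_NAME --zone=ZONE
# on the VM: look for the WordPress document root and its database credentials
sudo ls /var/www/html
sudo grep -E "DB_(HOST|NAME|USER)" /var/www/html/wp-config.php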