The domain name specified for the Docker container started by Cloud Run has not taken effect - gcloud

As the title says, I've been stuck on this for two days now and haven't been able to map my custom domain to the Cloud Run container.
https://hello-world.rxliuli.com/ => https://hello-world-nzgzxtw2lq-uc.a.run.app/
Test results at mxtoolbox.com/SuperTool.aspx
All other DNS records, which point to github.io, are working.
The rxliuli.com domain is verified in the Google console.
I don't know what went wrong; please let me know if you need additional information.

The DNS propagation time for a Cloud Run (fully managed) domain mapping is typically up to 24 hours. Although the SSL certificate can be issued within about 15 minutes, the entire procedure can take up to 24 hours.
If the domain is still not working after 24 hours, something went wrong during the configuration phase or a step was overlooked. Here is the documentation for Mapping Custom Domains.
Removing and re-adding the DNS records, as John mentioned, could be the primary fix.
I'd suggest giving it some time between attempts.
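While you wait, you can also inspect the mapping itself from the CLI. A minimal sketch, assuming the beta run domain-mappings commands are available in your gcloud version and that the service runs in us-central1 (swap in your own domain and region):

# Show the mapping; its status should list the DNS records Cloud Run expects
gcloud beta run domain-mappings describe \
    --domain hello-world.rxliuli.com \
    --platform managed --region us-central1

# Compare those records with what DNS actually serves
dig +short hello-world.rxliuli.com

If the describe output looks healthy but DNS serves something different, the problem is on the DNS side rather than on Cloud Run's.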

Related

How to create a cloud variable in Scratch

All over the internet I see examples/tutorials in which they create cloud variables. But when I create a variable (I've tried Scratch 2 and 3) I get:
What I would like to get is this:
I just watched a YouTube tutorial which said that for cloud variables to work you have to have been a Scratcher for at least 3 months (I've had an account for 4 months now). So what do I have to do to get the cloud checkbox?
There are two levels of Scratch users (for normal accounts): New Scratcher and Scratcher.
When you create an account, you get the status of a New Scratcher. This is what your status is now:
Then, to become a Scratcher, there are certain secret criteria you have to meet. If I recall correctly, it's some combination of having shared about three projects, having commented five times, and following a few users. Then you'll get the Scratcher status, and it'll look like this (using me as an example):
Basically, you have to participate more on the site to lose the New Scratcher status. Here are the requirements for becoming a full-on Scratcher.
Once you meet those criteria, head to the "data" section of blocks, press "create a new variable", and tick the "cloud variable" checkbox. Note that unlike regular variables, cloud variables can only hold numbers, not letters or any non-numeric characters apart from 0123456789.eE-+. Each value can be up to 256 characters long (leading and trailing zeroes are allowed), and each project can have at most ten cloud variables.
You have to become a "Scratcher", not a "New Scratcher".
You just have to sign in. It was a mistake I also made when I started working with cloud variables. If you have an account and it still does that, just verify your email and it will work.

How to block a specific IP address with mod_security after a specific number of requests in one minute

Normally I'm not the person who should be doing this; I'm a PHP developer with general knowledge of Apache and security administration, but due to an emergency I have to do it now.
I'm in a situation where I need to write a mod_security rule that:
- blocks a specific IP address from accessing our website,
- for 5 minutes,
- if it tries to call more than 10 links in less than 10 seconds.
Can I achieve that by writing a mod_security rule?
ModSecurity can do this, but I wouldn't suggest it.
Have a look at the DOS rules in the OWASP CRS: https://github.com/SpiderLabs/owasp-modsecurity-crs/blob/master/experimental_rules/modsecurity_crs_11_dos_protection.conf. Note that these depend on settings in the main CRS setup file: https://github.com/SpiderLabs/owasp-modsecurity-crs/blob/master/modsecurity_crs_10_setup.conf.example
However, ModSecurity collections are not the most stable, especially at high volume: you run into problems with multiple threads accessing the collection file, and you may find you have to delete the collection file regularly (e.g. every 24 hours) to stop it growing indefinitely.
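If you do want to try it, here is a minimal sketch of the pattern those CRS rules use, assuming ModSecurity 2.x with SecDataDir configured (the rule IDs are arbitrary placeholders):

# Track state per client IP (persisted under SecDataDir)
SecAction "id:100001,phase:1,pass,nolog,initcol:ip=%{REMOTE_ADDR}"

# If this IP is already flagged, keep refusing it until the flag expires
SecRule IP:BLOCKED "@eq 1" "id:100002,phase:1,deny,status:403,log,msg:'IP temporarily blocked for flooding'"

# Count the request; decay the counter by 10 every 10 seconds, approximating a 10-second window
SecAction "id:100003,phase:1,pass,nolog,setvar:ip.request_count=+1,deprecatevar:ip.request_count=10/10"

# More than 10 requests inside the window: flag the IP for 300 seconds (5 minutes)
SecRule IP:REQUEST_COUNT "@gt 10" "id:100004,phase:1,deny,status:403,log,setvar:ip.blocked=1,expirevar:ip.blocked=300,msg:'More than 10 requests in 10 seconds'"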

Billing by tag in Google Compute Engine

Google Compute Engine allows for a daily export of a project's itemized bill to a storage bucket (.csv or .json). In the daily file I can see X-number of seconds of N1-Highmem-8 VM usage. Is there a mechanism for further identifying costs, such as per tag or instance group, when a project has many of the same resource type deployed for different functional operations?
As an example, say ten n1-highmem-8 VMs are deployed to a region in a project. In the daily bill they just display as X seconds of N1-Highmem-8.
Functionally:
2 VMs might run a database 24x7
3 VMs might run a batch analytics operation averaging 2-5 hrs each night
5 VMs might perform a batch operation which runs in sporadic 10-minute intervals through the day
The final operation writes data to a specific GCS bucket; the other operations read/write to different buckets.
How might costs be broken out across these four operations each day?
The usage logs do not provide per-tag granularity at this time, and they can be a little tricky to work with, but here is what I recommend.
To break the usage logs down further and get better information out of them, work with the fields they provide:
Your usage logs provide the following fields:
Report Date
MeasurementId
Quantity
Unit
Resource URI
ResourceId
Location
If you look at the MeasurementId, you can filter by the type of image you want to verify. For example, VmimageN1Standard_1 represents the n1-standard-1 machine type.
You can then use the MeasurementID in combination with the Resource URI to find out what your usage is on a more granular (per instance) scale. For example, the Resource URI for my test machine would be:
https://www.googleapis.com/compute/v1/projects/MY_PROJECT/zones/ZONE/instances/boyan-test-instance
Note: I've replaced "MY_PROJECT" and "ZONE" here; those values, along with the instance name, will be specific to your output.
If you look at the end of the URI, you can clearly see which instance that is for. You could then use this to look for a specific instance you're checking.
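For instance, a short script can total the per-instance usage this way. This is only a sketch, assuming a CSV export whose column headers match the fields listed above; the file name and machine type are placeholders:

import csv
from collections import defaultdict

usage = defaultdict(float)
with open("usage_gce_2017-09-01.csv", newline="") as f:  # placeholder file name
    for row in csv.DictReader(f):
        # Filter to one machine type via MeasurementId
        if row["MeasurementId"].endswith("VmimageN1Standard_1"):
            # The instance name is the last segment of the Resource URI
            instance = row["Resource URI"].rsplit("/", 1)[-1]
            usage[instance] += float(row["Quantity"])

for instance, quantity in sorted(usage.items()):
    print(instance, quantity)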
If you are skilled with Excel or other spreadsheet/analysis software, you may be able to do even better; this is just one idea of how to use the logs. At that point it becomes somewhat a question of creativity, and I'm sure you can find good ways to work with the data you get from an export.
9/2017 update:
It is now possible to add user-defined labels and then track usage and billing by these labels for Compute Engine and GCS.
Additionally, by enabling the billing export to BigQuery, it is possible to create custom views or query the data from a tool friendlier to finance people, such as Google Docs, Data Studio, or anything else that can connect to BigQuery. Here is a great example of labels across multiple projects used to split costs into something friendlier to organizations, in this case a Data Studio report.
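For example, a view like the following splits cost by a label. This is a sketch assuming the standard billing export schema; the table name and the "function" label key are placeholders for your own:

-- Total cost per value of a hypothetical "function" label
SELECT l.value AS function_label,
       ROUND(SUM(cost), 2) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`,
     UNNEST(labels) AS l
WHERE l.key = 'function'
GROUP BY function_label
ORDER BY total_cost DESC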

Do multiple scripts (projects) contribute to Trigger Aggregate Execution Time?

I have ScriptA with some functions, in files with triggers, that all run under UserA and consume about 2 hours of runtime per day.
I have another project, ScriptB, with other functions in other files, whose triggers also all run under UserA (the same user as ScriptA's) and consume about 3 hours of runtime per day.
Is my Trigger Aggregate Execution Time quota (from the quota page here) aggregated per user or per script? That is, is it:
five hours (2 + 3) for UserA, or
two hours for ScriptA and three hours for ScriptB?
I have seen this answer but it doesn't explicitly address the scoping question I'm asking.
Obviously it's per user, not per script. Otherwise the quotas wouldn't make sense.
In the interests of getting some evidence together for this:
At 4:25 in this March 2013 episode of Google Apps Unscripted, Kalyan Reddy says that the quotas are "per account type", and as you can see in the dashboard, the quota table is gridded with columns labelled by those account types.
I have also done some testing: I made a script that uses quite a bit of time, and it started to max out other scripts running under the same account; many of that account's triggered scripts began getting "Service using too much computer time for one day" errors. Interestingly, though, after a couple of days those errors subsided, and I believe on a consumer account I am now getting well over 1 hour of execution time per day.
While not a direct answer to the question, and still a leap of logic/assumption, these two things make me feel that "per account" is more likely correct than "per script". I'll keep the question open a bit longer for comments (especially from Googlers).
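For anyone who wants to gather similar evidence, here is a sketch of one way to measure a script's daily trigger runtime; the wrapper, property key, and function names are all hypothetical:

// Accumulate today's trigger runtime (in ms) in script properties.
function timedRun_(work) {
  var start = Date.now();
  try {
    work();
  } finally {
    var props = PropertiesService.getScriptProperties();
    var key = 'runtime_' + new Date().toISOString().slice(0, 10); // e.g. runtime_2013-03-01
    var total = Number(props.getProperty(key) || 0) + (Date.now() - start);
    props.setProperty(key, String(total));
  }
}

// Time-driven trigger entry point.
function onTimer() {
  timedRun_(function () {
    // ...the script's real work...
  });
}

Comparing the accumulated totals of two scripts under the same account against when the "too much computer time" errors start should show whether the limit trips on the per-script or the per-account sum.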

GPS-specific questions for a service application

I am working on a simple application that needs to run as a service and report the GPS position every 3 minutes. I already have a working example based on the tutorial, but I still have the following doubts.
The service is started with GPS1.Start(5*60*1000, 0).
The documentation says the first parameter is the time lapse and the second is the distance difference. How is that determined: based on the prior position?
If I want to do what I stated above and I am scheduling/starting the service every 3 minutes, does this mean I will need to call GPS1.Start(0, 0) to get the latest fix? What would be the gain of using the parameters?
I'm testing on a Nexus One and the Time object comes in local time; I have to do the following to make it UTC, but this is a tweak to the code. Is this standard, or could it change based on the phone model? hora=DateTime.Date(Location1.Time + 6*DateTime.TicksPerHour)
Thanks
If you are only interested in a single fix each time then you should pass 0, 0. These values affect the frequency of subsequent events.
You can find the time zone with the code posted here: GetTimeZone
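To make that concrete, here is a minimal sketch of the single-fix pattern in a B4A service module; formatting the time as UTC via DateTime.SetTimeZone is my suggestion, as an alternative to hard-coding a +6 hour offset:

' Assumes Process_Globals contains: Private GPS1 As GPS
Sub Service_Start (StartingIntent As Intent)
    GPS1.Start(0, 0) ' 0, 0 = report the next fix immediately; the parameters only throttle later events
End Sub

Sub GPS_LocationChanged (Location1 As Location)
    ' Location1.Time is ticks (milliseconds since 1970-01-01 UTC); DateTime.Date
    ' formats it in the device's time zone, which is why it looks local.
    DateTime.SetTimeZone(0) ' format as UTC instead of adding a fixed offset
    Dim hora As String = DateTime.Date(Location1.Time)
    Log(hora)
    GPS1.Stop ' single fix obtained; stop to save battery
End Sub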