We are using Google Cloud CDN with a backend bucket. Everything works correctly and we see cache hits, but the cache hit rate is lower than expected. Analyzing it, I noticed that none of the requests has an Age above 3600, although our max-age is set to 86400. Setting max-age to something smaller works as expected. Is this defined behaviour? Are we setting anything wrong?
Here are the headers for one of the files:
HTTP/2 200
x-guploader-uploadid: AEnB2Ur4sV1ou6Av1U8OgQC8iNxgFmLzAbQ4bFQ4mBAYCyBOHviUAfAbkWFUycAUGLYDYbgNSdaw_zdkE6ySLdRTe0vScOh3Tw
date: Wed, 05 Sep 2018 14:40:29 GMT
expires: Thu, 06 Sep 2018 14:40:29 GMT
last-modified: Thu, 02 Mar 2017 15:31:23 GMT
etag: "1293d749638a24bf786a15f2a2a6ca76"
x-goog-generation: 1488468683233688
x-goog-metageneration: 3
x-goog-stored-content-encoding: identity
x-goog-stored-content-length: 89976
content-type: text/plain
x-goog-hash: crc32c=nIbPdQ==
x-goog-hash: md5=EpPXSWOKJL94ahXyoqbKdg==
x-goog-storage-class: STANDARD
accept-ranges: bytes
content-length: 89976
access-control-allow-origin: *
access-control-expose-headers: x-unity-version
access-control-expose-headers: origin
server: UploadServer
age: 3041
cache-control: public, max-age=86400
alt-svc: clear
According to this Cloud CDN documentation:
Note that a cache entry's expiration time is an upper bound on how long the cache entry remains valid. There is no guarantee that a cache entry will remain in the cache until it expires. Cache entries for unpopular content can be evicted to make room for new content. Regardless of the specified expiration time, cache entries that aren't accessed for 30 days are automatically removed.
That being said, we were able to reproduce the same behavior. Hence, to confirm whether this is indeed expected behavior or an issue, I have created a new issue on the Google Issue Tracker for your convenience.
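In the meantime, one way to observe the eviction behavior directly is to poll the object and log the Age header over time; if entries are evicted early, Age resets to 0 (or disappears) well before max-age is reached. A minimal sketch, assuming curl and a publicly reachable URL (the URL below is a placeholder):
# Poll the CDN-served object once a minute and log the Age header.
URL="https://cdn.example.com/path/to/file.txt"
while true; do
  age=$(curl -sI "$URL" | tr -d '\r' | grep -i '^age:')
  echo "$(date -u '+%Y-%m-%dT%H:%M:%SZ') ${age:-age: (cache miss)}"
  sleep 60
done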
Related
I was trying to change the storage class of a set of existing objects (over 300 GB), as advised in this post:
I tried it on one file first:
fyn@pod-arch:~$ gsutil ls -L gs://some-bucket/sub-dir/audioArch.mp3
gs://some-bucket/sub-dir/audioArch.mp3:
Creation time: Fri, 29 Jul 2016 00:52:51 GMT
Update time: Fri, 05 Aug 2016 15:40:51 GMT
Storage class: DURABLE_REDUCED_AVAILABILITY
Content-Language: en
Content-Length: 43033404
Content-Type: audio/mpeg
... ...
fyn@pod-arch:~$ gsutil -m rewrite -s coldline gs://some-bucket/sub-dir/audioArch.mp3
- [1/1 files][ 41.0 MiB/ 41.0 MiB] 100% Done
Operation completed over 1 objects/41.0 MiB.
fyn@pod-arch:~$ gsutil ls -L gs://some-bucket/sub-dir/audioArch.mp3
gs://some-bucket/sub-dir/audioArch.mp3:
Creation time: Sun, 30 Oct 2016 23:49:34 GMT
Update time: Sun, 30 Oct 2016 23:49:34 GMT
Storage class: COLDLINE
Content-Language: en
Content-Length: 43033404
Content-Type: audio/mpeg
... ...
Then I tried it on 15 more, and then on the rest of the objects in a subdir... Works like a charm ☺, although the operation overwrites the Creation & Update times!
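For reference, the whole subdirectory can be converted in one command; a minimal sketch, assuming the same bucket layout as above:
# Recursively rewrite every object under the subdirectory to Coldline;
# -m parallelizes the per-object rewrite calls.
gsutil -m rewrite -s coldline -r gs://some-bucket/sub-dir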
I had two follow-up queries though:
Is the gsutil rewrite operation billable?
Can the Creation time be preserved?
Many thanks.
Cheers!
fynali
Yes, it is billable as a Class A operation (it uses storage.objects.rewrite; see cloud.google.com/storage/pricing). No, there's no way to preserve the creation/update time, because rewrite creates a new object generation.
–Travis Hobrla in comment here
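If losing the timestamps matters, one hypothetical workaround is to stash the original creation time in custom metadata before rewriting; the x-goog-meta key below is made up for illustration:
# Record the original creation time as custom metadata (preserved by
# rewrite), then change the storage class.
created=$(gsutil ls -L gs://some-bucket/sub-dir/audioArch.mp3 \
  | sed -n 's/^ *Creation time: *//p')
gsutil setmeta -h "x-goog-meta-original-creation-time:${created}" \
  gs://some-bucket/sub-dir/audioArch.mp3
gsutil -m rewrite -s coldline gs://some-bucket/sub-dir/audioArch.mp3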
I'm using Net::Google::Drive::Simple to fetch a document from the web and place it on my Google Drive. The script was working fine until I recently started getting the following error: Token refresh failed at /usr/local/share/perl/5.20.2/OAuth/Cmdline.pm line 76.
Printing out the headers from the response from Google shows the following:
HTTP/1.1 400 Bad Request
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Connection: close
Date: Mon, 24 Oct 2016 08:32:13 GMT
Pragma: no-cache
Accept-Ranges: none
Server: GSE
Vary: Accept-Encoding
Content-Type: application/json; charset=utf-8
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
Client-Date: Mon, 24 Oct 2016 08:32:13 GMT
Client-Peer: 2607:f8b0:4006:80e::200d:443
Client-Response-Num: 1
Client-SSL-Cert-Issuer: /C=US/O=Google Inc/CN=Google Internet Authority G2
Client-SSL-Cert-Subject: /C=US/ST=California/L=Mountain View/O=Google Inc/CN=accounts.google.com
Client-SSL-Cipher: ECDHE-RSA-AES128-SHA
Client-SSL-Socket-Class: IO::Socket::SSL
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
{
"error" : "invalid_grant"
}
First thing to check:
Your server's clock is not in sync with NTP. (Solution: check the server time; if it's incorrect, fix it.)
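A quick way to verify, assuming a Linux host with ntpdate installed:
# Query an NTP server without setting the clock; a large offset means
# the server time is wrong.
ntpdate -q pool.ntp.org
# On systemd hosts, enable automatic NTP synchronization:
sudo timedatectl set-ntp true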
Second thing to check:
Your refresh token is not valid and you need a new one: authenticate your code again.
Possible reasons for a refresh token to no longer work:
User revoked it in their Google account.
The refresh token hasn't been used to get a new access token in six months.
Maximum number of refresh tokens for the user reached: when a user authenticates your application, you get a refresh token associated with that user. If the user authenticates again, you get a second refresh token. You can have up to 25 outstanding refresh tokens per user; at that point the oldest one expires and no longer works. This is why it's important to always save the most recent refresh token for a user.
I'm trying to upload a big file (9 GB) to Google Storage using Cyberduck.
Login and transfers of small files work. However, for this file I'm getting the following error:
GET / HTTP/1.1
Date: Wed, 30 Apr 2014 08:47:34 GMT
x-goog-project-id: 674064471933
x-goog-api-version: 2
Authorization: OAuth SECRET_KEY
Host: storage.googleapis.com:443
Connection: Keep-Alive
User-Agent: Cyberduck/4.4.4 (Mac OS X/10.9) (x86_64)
HTTP/1.1 200 OK
Content-Type: application/xml; charset=UTF-8
Content-Length: 340
Date: Wed, 30 Apr 2014 08:47:35 GMT
Expires: Wed, 30 Apr 2014 08:47:35 GMT
Cache-Control: private, max-age=0
Server: HTTP Upload Server Built on Apr 16 2014 16:50:43 (1397692243)
Alternate-Protocol: 443:quic
GET /vibetracestorage/?prefix=eventsall.csv&uploads HTTP/1.1
Date: Wed, 30 Apr 2014 08:47:35 GMT
x-goog-api-version: 2
Authorization: OAuth SECRET_KEY
Host: storage.googleapis.com:443
Connection: Keep-Alive
User-Agent: Cyberduck/4.4.4 (Mac OS X/10.9) (x86_64)
HTTP/1.1 400 Bad Request
Content-Type: application/xml; charset=UTF-8
Content-Length: 173
Date: Wed, 30 Apr 2014 08:47:36 GMT
Expires: Wed, 30 Apr 2014 08:47:36 GMT
Cache-Control: private, max-age=0
Server: HTTP Upload Server Built on Apr 16 2014 16:50:43 (1397692243)
Alternate-Protocol: 443:quic
Am I missing anything? Thanks.
According to that log you posted, you're placing a GET to "https://storage.googleapis.com/vibetracestorage/?prefix=eventsall.csv&uploads".
I don't know what that "uploads" parameter tacked onto the end is, but it's not a valid parameter for requesting a bucket listing (which is what that request does).
If you place that request by hand, you'll see this error:
<?xml version='1.0' encoding='UTF-8'?><Error><Code>InvalidArgument</Code><Message>Invalid argument.</Message><Details>Invalid query parameter(s): [uploads]</Details></Error>
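For example, with curl (whether you see the InvalidArgument error or an authorization error first will depend on the bucket's ACLs):
curl -s "https://storage.googleapis.com/vibetracestorage/?prefix=eventsall.csv&uploads"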
Also, as a general point of good practice, do not post logs that contain your full Authorization header. That is a very, very bad idea. You may want to delete this question, although those credentials will expire (and perhaps already have).
This is an interoperability issue. In Cyberduck, when connected to S3, multipart uploads are supported as defined by Amazon S3. The request with the uploads parameter is used to find existing, in-progress multipart uploads for the same target object that can be resumed.
Make sure to choose Google Storage and not S3 in the protocol dropdown list in the bookmark or connection prompt. Multipart uploads should then be disabled.
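For comparison, the uploads probe corresponds to S3's ListMultipartUploads call; a sketch with the aws CLI, reusing the bucket name from the log above purely for illustration:
# List in-progress multipart uploads (S3, not Google Storage).
aws s3api list-multipart-uploads --bucket vibetracestorage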
Background info: some of my website's inner pages had malware on them last week. The whole site was reset, updated, and cleared of malware a couple of days ago. The website is SE-optimized, but not spammed at all.
Q: Some inner pages of the website have suddenly dropped from the Google results.
Searching site:http://www.domain.com/innerpage doesn't even give a result.
Searching cache:http://www.domain.com/innerpage has had no results since today.
Webmaster Tools page error: "The page seems to redirect to itself. This may result in an infinite redirect loop. Please check the Help Center article about redirects."
HTTP/1.1 301 Moved Permanently
Date: Mon, 28 Oct 2013 20:15:18 GMT
Server: Apache
X-Powered-By: PHP/5.3.21
Set-Cookie: PHPSESSID=170fc3f0740f2eb26ed661493992843a; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
X-Pingback: http://www.domain.com/xmlrpc.php
Location: http://www.domain.com/innerpage/
Vary: Accept-Encoding,User-Agent
Content-Encoding: gzip
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
The .htaccess file looks fine too. Do you guys have any idea what's going on here?
The page is W3C-valid, and even online Googlebot simulators show status 200 OK.
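For what it's worth, the redirect chain can be traced from the command line; a minimal sketch with curl, using the placeholder domain from above:
# Follow redirects and print each hop's status line and Location header.
curl -sIL "http://www.domain.com/innerpage" | grep -iE '^(HTTP/|location:)'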
Our sports website provides an ICS calendar for our matches, draws, etc.
For retrieving the ICS file we use a PHP script which reads the local ICS file, optionally filters the VEVENT records, etc., and returns the ICS data.
I have subscribed to this ICS feed via webcal://... on my iPhone.
I now see the weird behaviour that SOME whole-day events (DURATION:P1D) like this
BEGIN:VEVENT
DTSTART;VALUE=DATE:20120623
DURATION:P1D
TRANSP:TRANSPARENT
SUMMARY:Auslosung: VWM: Super Globe
DESCRIPTION:VWM: Super Globe
UID:20110124#thw-provinzial.de
CATEGORIES:THW-Termin
URL:http://www.thw-provinzial.de/thw/
COMMENT:TYPE=VWM
END:VEVENT
span two days in my iPhone Calendar if I subscribe to the PHP script via webcal://www.thw-provinzial.de/thw/ics.php?config=all.
(It is shown on both 20120623 and 20120624.)
If I subscribe to the ICS file directly via http://www.thw-provinzial.de/thw/thwdate2.ics, the event is shown correctly on 20120623 only.
If I do a
wget http://www.thw-provinzial.de/thw/thwdate2.ics
wget http://www.thw-provinzial.de/thw/ics.php?config=all
and then diff the two outputs, the only difference is the X-WR-CALNAME; all other content is identical.
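The equivalent one-liner, assuming bash with process substitution:
diff <(wget -qO- "http://www.thw-provinzial.de/thw/thwdate2.ics") \
     <(wget -qO- "http://www.thw-provinzial.de/thw/ics.php?config=all")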
Could it be that some header information in the response is confusing the iPhone?
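The two response headers below can be reproduced with HEAD requests, e.g.:
curl -sI "http://www.thw-provinzial.de/thw/thwdate2.ics"
curl -sI "http://www.thw-provinzial.de/thw/ics.php?config=all"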
Response header of thwdate2.ics (here the behaviour is fine):
HTTP/1.0 200 OK
Date: XXXXXX
Server: Apache
Last-Modified: Wed, 13 Jun 2012 20:05:04 GMT
ETag: "6c6f78d-c54d-4c260194d7c00"
Accept-Ranges: bytes
Content-Length: 50509
Vary: Accept-Encoding,User-Agent
Content-Type: text/calendar
Age: 787
Response header of ics.php (here we have the problem with events spanning two days):
HTTP/1.0 200 OK
Date: Thu, XXXXXX
Server: Apache
Content-Disposition: inline; filename=thwdates.ics
Pragma: no-cache
Cache-Control: no-cache, must-revalidate
Expires: Sat, 26 Jul 1997 05:00:00 GMT
Last-Modified: Wed, 13 Jun 2012 20:05:04 GMT
Vary: Accept-Encoding,User-Agent
Content-Type: text/calendar; charset=utf-8
Any ideas?