I've successfully installed the newest google-cloud-sdk on my Mac, so all my gcloud and gsutil command-line tools are up to date.
However, whenever I try a gsutil command, it times out. For example, when I run:
gsutil mb gs://cloud-storage-analysis
it starts to run, printing:
Creating gs://cloud-storage-analysis/...
But then it never stops. When I stop it with Control-C, it prints this:
Caught signal 2 - exiting
Traceback (most recent call last):
File "/Users/jazz/google-cloud-sdk/bin/bootstrapping/gsutil.py", line 71, in <module>
main()
File "/Users/jazz/google-cloud-sdk/bin/bootstrapping/gsutil.py", line 54, in main
'platform/gsutil', 'gsutil', *args)
File "/Users/jazz/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 45, in ExecutePythonTool
execution_utils.ArgsForPythonTool(_FullPath(tool_dir, exec_name), *args))
File "/Users/jazz/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 86, in _ExecuteTool
execution_utils.Exec(args + sys.argv[1:], env=_GetToolEnv())
File "/Users/jazz/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/util/execution_utils.py", line 146, in Exec
ret_val = p.wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1357, in wait
pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 478, in _eintr_retry_call
return func(*args)
KeyboardInterrupt
The one time I let it run for a long time, it actually timed out and stopped. This happens for every gsutil command I try.
My bq commands work (but they're really slow).
I don't know what's wrong.
Thanks for any help.
Edit:
My problem hasn't entirely gone away, but gsutil does sometimes work now. When it works and when it doesn't is very intermittent and seemingly random. Refreshing the shell and/or quitting and reopening Terminal seems to help, but not every time. I'd still like to get to the bottom of it.
So as Misha suggested, I ran gsutil -D ls as a test.
It got to here and then stopped for a while (maybe 2-3 minutes): (some info has been [removed])
gsutil version: 4.13
checksum: [key] (OK)
boto version: 2.38.0
python version: 2.7.5 (default, Mar 9 2014, 22:15:05) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]
OS: Darwin 13.4.0
multiprocessing available: True
using cloud sdk: True
config path: [path-to-home]/.boto
gsutil path: [path-to-home]/google-cloud-sdk/platform/gsutil/gsutil
compiled crcmod: True
installed via package manager: False
editable install: False
Command being run: [path-to-home]/google-cloud-sdk/platform/gsutil/gsutil -o GSUtil:default_project_id=storagelogstest -D ls
config_file_list: ['[path-to-home]/.config/gcloud/legacy_credentials/[email]#gmail.com/.boto', '/.boto']
config: [('debug', '0'), ('working_dir', '/mnt/pyami'), ('https_validate_certificates', 'True'), ('debug', '0'), ('working_dir', '/mnt/pyami'), ('content_language', 'en'), ('default_api_version', '2'), ('default_project_id', 'storagelogstest')]
DEBUG 0626 10:43:29.972406 oauth2_client.py] GetAccessToken: checking cache for key 9886bbc6e7e67cf6d2b8b3a707046f2a326dcceb
DEBUG 0626 10:43:29.972712 oauth2_client.py] FileSystemTokenCache.GetToken: key=9886bbc6e7e67cf6d2b8b3a707046f2a326dcceb not present (cache_file=/var/folders/25/zs7lm5jd7dg5jljd4qdxjpnc0000gq/T/oauth2_client-tokencache.503.9886bbc6e7e67cf6d2b8b3a707046f2a326dcceb)
DEBUG 0626 10:43:29.972882 oauth2_client.py] GetAccessToken: token from cache: None
DEBUG 0626 10:43:29.973030 oauth2_client.py] GetAccessToken: fetching fresh access token...
INFO 0626 10:43:29.973551 client.py] Refreshing access_token
Then it output this:
connect fail: (accounts.google.com, 443)
connect: (accounts.google.com, 443)
send: 'POST /o/oauth2/token HTTP/1.1\r\nHost: accounts.google.com\r\nContent-Length: 195\r\ncontent-type: application/x-www-form-urlencoded\r\naccept-encoding: gzip, deflate\r\nuser-agent: Python-httplib2/0.7.7 (gzip)\r\n\r\nclient_secret=ZmssLNjJy2998hD4CTg2ejr2&grant_type=refresh_token&refresh_token=1%2FUl4EXn6N5jPCjFVy6-U5HwIKNApkGmYEEQPZO654NxHBactUREZofsF9C7PrpE-j&client_id=32555940559.apps.googleusercontent.com'
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: application/json; charset=utf-8
header: Cache-Control: no-cache, no-store, max-age=0, must-revalidate
header: Pragma: no-cache
header: Expires: Fri, 01 Jan 1990 00:00:00 GMT
header: Date: Fri, 26 Jun 2015 17:44:45 GMT
header: Content-Disposition: attachment; filename="json.txt"; filename*=UTF-8''json.txt
header: Content-Encoding: gzip
header: P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
header: X-Content-Type-Options: nosniff
header: X-Frame-Options: SAMEORIGIN
header: X-XSS-Protection: 1; mode=block
header: Server: GSE
header: Set-Cookie: NID=68=PfDga1cpnMr8ho-0tlBrWNhgLzQsThRzV31vn8QD1cV45H8C4-ydGoMI0ITI0lPJHvKhN_uPSisTQwIzM2LEFKqjXZlgsJ-9l0HiflLdl1UGMevAQ2GFxFqa369vQxZG;Domain=.google.com;Path=/;Expires=Sat, 26-Dec-2015 17:44:45 GMT;HttpOnly
header: Alternate-Protocol: 443:quic,p=1
header: Transfer-Encoding: chunked
DEBUG 0626 10:44:45.808030 oauth2_client.py] GetAccessToken: fresh access token: AccessToken(token=ya29.ngEY7SR5AbdDZ2pWMtBJzAnJGYGUgXB6hKcAwE8I274ieyLmEpuD1WypFJ8jAZN9LS5zCZ3ldGL4MA, expiry=2015-06-26 18:44:45.806928Z)
DEBUG 0626 10:44:45.808321 oauth2_client.py] FileSystemTokenCache.PutToken: key=9886bbc6e7e67cf6d2b8b3a707046f2a326dcceb, cache_file=/var/folders/25/zs7lm5jd7dg5jljd4qdxjpnc0000gq/T/oauth2_client-tokencache.503.9886bbc6e7e67cf6d2b8b3a707046f2a326dcceb
INFO 0626 10:44:45.813867 base_api.py] Calling method storage.buckets.list with StorageBucketsListRequest: <StorageBucketsListRequest
maxResults: 1000
project: 'storagelogstest'
projection: ProjectionValueValuesEnum(full, 0)>
INFO 0626 10:44:45.814872 base_api.py] Making http GET to https://www.googleapis.com/storage/v1/b?project=storagelogstest&fields=nextPageToken%2Citems%2Fid&alt=json&projection=full&maxResults=1000
INFO 0626 10:44:45.815298 base_api.py] Headers: {'accept': 'application/json',
'accept-encoding': 'gzip, deflate',
'content-length': '0',
'user-agent': 'apitools gsutil/4.13 (darwin) Cloud SDK Command Line Tool 0.9.66'}
INFO 0626 10:44:45.815390 base_api.py] Body: (none)
DEBUG 0626 10:45:45.846443 http_wrapper.py] Caught socket error, retrying: timed out
It tried to reconnect a few times, but it timed out each time, and then I stopped it.
I have not tried this on another machine, but I have talked to a coworker who has the same problem (frequently it won't work, but sometimes it does).
So if you have this problem, you'll most likely be able to get gsutil to work if you just keep retrying. Try restarting the shell (exec -l $SHELL) and quitting/reopening the terminal, and keep trying; it eventually worked for me. This is not a permanent fix (it still times out about 2/3 of the time for me), but at least you'll be able to run your commands.
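The keep-retrying workaround above can be scripted so you don't have to babysit the terminal. This is only a sketch of the idea, not an official fix; the retry count and timeout are arbitrary choices:

```python
import subprocess
import sys

def run_with_retries(cmd, attempts=5, timeout_s=120):
    """Run cmd, killing and retrying it if it hangs past timeout_s seconds."""
    for attempt in range(1, attempts + 1):
        try:
            result = subprocess.run(cmd, capture_output=True, text=True,
                                    timeout=timeout_s)
            if result.returncode == 0:
                return result.stdout
            print("attempt %d: exit code %d" % (attempt, result.returncode),
                  file=sys.stderr)
        except subprocess.TimeoutExpired:
            print("attempt %d: timed out after %ds" % (attempt, timeout_s),
                  file=sys.stderr)
    raise RuntimeError("%r failed after %d attempts" % (cmd, attempts))

# e.g. print(run_with_retries(["gsutil", "ls"]))
```

This just automates the "try it a lot" approach; it doesn't address whatever is causing the intermittent token-refresh timeouts.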
Hopefully Google can address this problem.
Edit: it seems the issue was caused by a dropped cookie. There should have been a session id cookie as well.
For posterity, here's the original question
When sending a request formed like this:
GET https://<url>?<parameters>
Cache-Control: max-age=0
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset: iso-8859-1,utf-8,UTF-8
Accept-Encoding: gzip, x-gzip, deflate, x-bzip2
Accept-Language: en-US,en;q=0.5
If-None-Match: "6eb7d55abfd0546399e3245ad3a76090"
User-Agent: Mozilla/5.0 libwww-perl/6.13
Cookie: auth_token=<blah>; __cfduid=<blah>
Cookie2: $Version="1"
I receive the following response:
response-type: text/html
charset=utf-8
HTTP/1.1 406 Not Acceptable
Cache-Control: no-cache
Connection: keep-alive
Date: Fri, 12 Feb 2016 18:34:00 GMT
Server: cloudflare-nginx
Content-Type: text/html; charset=utf-8
CF-RAY: 273a62969a9b288e-SJC
Client-Date: Fri, 12 Feb 2016 18:34:00 GMT
Client-Peer: <IP4>:443
Client-Response-Num: 10
Client-SSL-Cert-Issuer: /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO ECC Domain Validation Secure Server CA 2
Client-SSL-Cert-Subject: /OU=Domain Control Validated/OU=PositiveSSL Multi-Domain/CN=ssl<blah>.cloudflaressl.com
Client-SSL-Cipher: <some value>
Client-SSL-Socket-Class: IO::Socket::SSL
Client-SSL-Warning: Peer certificate not verified
Client-Transfer-Encoding: chunked
Status: 406 Not Acceptable
X-Runtime: 9
I'm not entirely sure why the response is 406 Not Acceptable. When downloaded with Firefox, the file in question is 996 KB (as reported by Windows 8.1's Explorer). It looks like I have a partially transferred file from my Perl script at 991 KB (again, Windows Explorer size), so it got MOST of the file before throwing the Not Acceptable response. Using the same URL pattern and request style, I was able to successfully download a 36 MB file from the server with this Perl library and request form, so the size of the file should not be magically past some max (chunk) size. As these files are being updated on approximately 15-minute intervals, I suppose it's possible that a write was performed on the server, invalidating the ETag before all chunks were complete on this file?
I tried adding chunked to Accept-Encoding, but that's for transfer encoding, not content encoding, and it appears to have no effect on the server's behavior. Additionally, as I've been able to download larger files (same format) from the same server, size alone shouldn't be the cause of my woes. LWP is supposed to be able to handle chunked data returned by a response to GET (as per this newsgroup post).
The server in question is running nginx with Rack::Lint. The particular server configuration (which I in no way control) throws 500 errors on its own attempts to send 304: not modified. This caused me to write a workaround for File::Download (sub lintWorkAround here), so I'm not above putting blame on the server in this instance also, if warranted. I don't believe I buggered up the chunk-handling code from File::Download 0.3 (see diff), but I suppose that's also possible. Is it possible to request a particular chunk size from the server?
I'm using LWP and libwww versions 6.13 in perl 5.18.2.
File::Download version is my own 0.4_050601.
So, what else could the 406 error mean? Is there a way to request that the server temporarily cache/version control the entire file so that I can download a given ETag'd file once the transfer begins?
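One standard way to get a consistent copy despite mid-transfer updates is to resume with a Range request guarded by If-Range: the server sends the remaining bytes only while the ETag still matches, and otherwise responds 200 with the full new file. This assumes the server honors range requests, which is not guaranteed here; a minimal sketch (in Python rather than Perl) of building those headers:

```python
def resume_headers(etag, bytes_received):
    """Headers to resume a download from byte offset bytes_received.

    If-Range makes the server send the requested range only while the
    ETag still matches; otherwise it replies 200 with the whole new
    file, so a stale partial copy is never silently completed.
    """
    return {
        "Range": "bytes=%d-" % bytes_received,
        "If-Range": etag,
    }

# e.g. resuming the 991 KB partial file mentioned above:
headers = resume_headers('"6eb7d55abfd0546399e3245ad3a76090"', 991 * 1024)
```

In LWP the equivalent would be passing these as extra header arguments to the GET; the key point is the If-Range guard, not the particular client library.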
I am writing some Elisp code that downloads files using url-copy-file, and most of the time it works fine, but sometimes the contents of the file end up being the http headers, e.g. the downloaded file has the following contents:
HTTP/1.1 200 OK
Server: GitHub.com
Date: Thu, 14 Nov 2013 20:54:41 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Status: 200 OK
Strict-Transport-Security: max-age=31536000
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-UA-Compatible: chrome=1
Access-Control-Allow-Origin: https://render.github.com
Or sometimes the following is appended to the end of an otherwise correctly-downloaded file:
0
- Peer has closed the GnuTLS connection
However, when these things occur, the function seems to return just fine, so there is no way for me to verify that the file has really been downloaded. Is there any more reliable way to download a file in elisp (without shelling out to wget/curl/whatever)?
As recommended, I have reported this as a bug: lists.gnu.org/archive/html/bug-gnu-emacs/2013-11/msg00758.html
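Until the underlying bug is fixed, one workaround (language-agnostic; sketched here in Python rather than Elisp) is to sanity-check the downloaded bytes after the call returns. A good file should not start with an HTTP status line, and the failed chunked transfer leaves a stray trailing "0" line like the one shown above. This is only a heuristic and could misfire on files that legitimately end in "0":

```python
def looks_corrupted(data: bytes) -> bool:
    """Heuristic check for the two failure modes described above:
    the file containing raw HTTP response headers instead of the body,
    or a leftover chunked-encoding terminator appended to the content."""
    if data.startswith(b"HTTP/1."):
        return True  # headers were saved instead of the body
    tail = data.rstrip(b"\r\n")
    if tail.endswith(b"\n0") or tail == b"0":
        return True  # trailing "0" chunk marker leaked into the file
    return False
```

The same two checks are straightforward to port to Elisp with string-prefix-p and a regexp on the buffer tail.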
I am not sure whether netcat has an option for requesting only the HTTP headers, or whether I should use sed to filter the result. If sed is used in this case, do I only have to extract everything before the first "<" occurs?
The command line tool curl has the -I option which only gets the headers:
-I/--head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document. When used on a FTP or FILE file, curl displays the file size and last modification time only.
Demo:
$ curl -I stackoverflow.co.uk
HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 503
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
Date: Thu, 26 Sep 2013 21:06:15 GMT
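If you do use netcat (or anything else that returns the raw response), note that the header block is simply everything up to the first blank line, so you can split it off without sed. A small Python sketch, parsing a response shaped like the curl output above:

```python
def parse_headers(raw: str):
    """Split a raw HTTP response into (status_line, header_dict).
    Headers end at the first blank line; the body (if any) follows."""
    head = raw.split("\r\n\r\n")[0] if "\r\n\r\n" in raw else raw.split("\n\n")[0]
    lines = head.splitlines()
    status = lines[0]
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return status, headers

status, hdrs = parse_headers(
    "HTTP/1.1 200 OK\r\n"
    "Cache-Control: private\r\n"
    "Content-Length: 503\r\n"
    "Content-Type: text/html; charset=utf-8\r\n"
    "\r\n"
    "<html>...</html>"
)
```

With netcat you would still transfer the body; curl -I avoids that entirely by issuing a HEAD request, which is why it's the better tool when the server supports it.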
I wrote a test plan on Windows 7. I remotely started a test on two machines, both running Windows Vista. A problem came up when I tried to do the same thing on Linux, using the same test plan.
I can log in a group of users and simulate their behaviour, but when I try to log them out, nothing happens.
On Windows they are logged out, but Linux gives me empty response data. Listeners show green status, so I'm rather confused about what's going on. Should I change something in the properties, or is it a problem with my script?
EDIT:
Script:
Log in users using authorization data; every user gets a different JSESSIONID.
Simulate user behaviour using the Access Log Sampler.
Log out users.
On Windows, everything works fine: login and logout. The listener shows the sample result, request data, and response data for every sample.
On Linux, the response data is blank for every sample.
Examples of sample results for
Windows and
Linux.
The request data is the same for both.
The response data for Linux is blank.
EDIT2:
Test Plan
setUP Thread Group
Clean cache server
Clean file with JSESSIONID
Thread Group
HTTP Request Defaults
Login (once only controller)
Access Log Sampler
using a BeanShell script, I save the JSESSIONID (cookie variable) to a file
Cookie Manager
tearDown Thread Group
HTTP Request Defaults
read JSESSIONID from file
logout all users
Cookie Manager
result tree
Summary report
Logout must be performed after all samples from the access log are done. That's why I save the JSESSIONID to a file: to share the same session between thread groups.
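For reference, the BeanShell step only needs to pull the JSESSIONID value out of the Set-Cookie header. The extraction itself is trivial; sketched here in Python rather than BeanShell, using the header format from the sample results shown further down:

```python
import re

def extract_jsessionid(set_cookie_header):
    """Pull the JSESSIONID value out of a Set-Cookie header line."""
    m = re.search(r"JSESSIONID=([^;\s]+)", set_cookie_header)
    return m.group(1) if m else None

sid = extract_jsessionid(
    "Set-Cookie: JSESSIONID=6D3F7A3774ABB1411A5F8E1744004A71; Path=/WKPLOnline"
)
```

In JMeter the same thing can also be done declaratively with a Regular Expression Extractor scoped to the response headers, which avoids custom scripting.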
OK, somehow I eliminated the error with the response. Apparently there was a problem with the Java version on the Linux server.
The current problem is that when I remotely start the script on Linux, it doesn't follow redirects. The same script on Windows XP or Vista follows the redirects and the user is logged out.
Example:
GET connection.rpc?logout=D5D076123FD6CCBF137FE1673F531006
On Windows I get two redirections and the user is logged out.
Thread Name: Logout 1-1
Sample Start: 2013-05-18 13:50:52 CEST
Load time: 15
Latency: 13
Size in bytes: 777
Headers size in bytes: 573
Body size in bytes: 204
Sample Count: 1
Error Count: 0
Response code: 200
Response message: OK
Response headers:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
X-wkpl-server-name: OnlineRC2
Content-Type: text/html;charset=UTF-8
Content-Language: en-US
Content-Length: 204
Date: Sat, 18 May 2013 11:50:43 GMT
HTTPSampleResult fields:
ContentType: text/html;charset=UTF-8
DataEncoding: UTF-8
Thread Name:
Sample Start: 2013-05-18 13:50:52 CEST
Load time: 13
Latency: 13
Size in bytes: 374
Headers size in bytes: 374
Body size in bytes: 0
Sample Count: 1
Error Count: 0
Response code: 302
Response message: Moved Temporarily
Response headers:
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
Set-Cookie: JSESSIONID=6D3F7A3774ABB1411A5F8E1744004A71; Path=/WKPLOnline
CacheControl: no-cache
Pragma: no-cache, no-store
Expires: -1
Location: connection.rpc?logout=BE8C04D8538641675A8BFD2490CDDD4D
Content-Length: 0
Date: Sat, 18 May 2013 11:50:43 GMT
Thread Name: Logout 1-1
HTTPSampleResult fields:
ContentType:
DataEncoding: null
Sample Start: 2013-05-18 13:50:52 CEST
Load time: 2
Latency: 2
Size in bytes: 403
Headers size in bytes: 199
Body size in bytes: 204
Sample Count: 1
Error Count: 0
Response code: 200
Response message: OK
Response headers:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
X-wkpl-server-name: OnlineRC2
Content-Type: text/html;charset=UTF-8
Content-Language: en-US
Content-Length: 204
Date: Sat, 18 May 2013 11:50:43 GMT
HTTPSampleResult fields:
ContentType: text/html;charset=UTF-8
DataEncoding: UTF-8
On Linux I don't get the redirects and the user is not logged out.
Thread Name: Logout 1-1
Sample Start: 2013-05-18 13:51:48 CEST
Load time: 18
Latency: 18
Size in bytes: 264
Headers size in bytes: 243
Body size in bytes: 21
Sample Count: 1
Error Count: 0
Response code: 200
Response message: OK
Response headers:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: JSESSIONID=D17A4ABCDE7FB90C1DF702BDCB3827D7; Path=/WKPLOnline
CacheControl: no-cache
Pragma: no-cache, no-store
Expires: -1
Content-Length: 21
Date: Sat, 18 May 2013 11:51:53 GMT
HTTPSampleResult fields:
ContentType:
DataEncoding: null
It is strange, because during authorization there are a few redirects and Linux performs them correctly.
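Note that the Location header in the 302 above is relative (connection.rpc?logout=...), so whatever follows the redirect has to resolve it against the current request URL first. A sketch of that resolution step (the base URL here is a made-up example, since the real host isn't shown in the samples):

```python
from urllib.parse import urljoin

# Hypothetical current request URL; only the /WKPLOnline path is from the logs.
base = "http://example.com/WKPLOnline/connection.rpc?logout=D5D076123FD6CCBF137FE1673F531006"

# Relative Location header taken from the 302 response shown above.
location = "connection.rpc?logout=BE8C04D8538641675A8BFD2490CDDD4D"

next_url = urljoin(base, location)  # resolve it against the current URL
```

A client that mishandles relative Location values would fail exactly the way described: the first absolute redirects during authorization work, but this relative one does not.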
You should check that JMeter accesses your JSESSIONID file correctly on Linux:
check the path is OK (no backslashes)
check read access
If you are using distributed testing, the issue may be that the file is not found by an agent, or the file is overwritten by another agent.
Problem is solved, yupi :)
It turned out that the target server had set a lock for some machines, and the Linux machine was one of them. That is why I could not log users out in a separate thread.
So, if someone encounters a similar problem (requests from one machine are handled correctly while requests from another are not), they should check whether their machine has the correct permissions. In my case, I needed to make the correct entry in adm.list on the test server.
Hi, I'm experiencing a super weird problem.
Whenever I post links to my website on Facebook, they come up as Forbidden.
The site itself works great, and I have not seen this when linking on other sites.
Could this be a server misconfiguration? Any thoughts on where to look?
Here's some info:
I have a dedicated server running WHM 11.25.0
I have 2 sites hosted here using cPanel 11.25.0
The error message:
Forbidden: You don't have permission to access /blog/deepwater-horizon-11/ on this server. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at www.offshoreinjuries.com Port 80
UPDATE:
Here is a sample link if it helps. (Notice that going to the linked page directly works fine.)
http://www.facebook.com/l.php?u=http%3A%2F%2Fwww.offshoreinjuries.com%2Fblog%2Fdeepwater-horizon-11%2F&h=834ea
UPDATE and ANSWER:
Found the issue and added a complete answer below.
You must have a rule somewhere that reads the HTTP_REFERER and rejects incoming links from Facebook. Seriously. This is what happens between the lines:
No referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
HTTP/1.1 200 OK
Date: Fri, 28 May 2010 09:19:45 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.12
X-Pingback: http://www.offshoreinjuries.com/blog/xmlrpc.php
Content-Type: text/html; charset=UTF-8
200 OK, good.
Facebook referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
Referer: http://www.facebook.com/l.php?u=http%3A%2F%2Fwww.offshoreinjuries.com%2Fblog%2Fdeepwater-horizon-11%2F&h=834ea
HTTP/1.1 403 Forbidden
Date: Fri, 28 May 2010 09:21:04 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
Content-Type: text/html; charset=iso-8859-1
403 Forbidden, bad.
Any other referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
Referer: http://alvaro.es/
HTTP/1.1 200 OK
Date: Fri, 28 May 2010 09:20:36 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.12
X-Pingback: http://www.offshoreinjuries.com/blog/xmlrpc.php
Content-Type: text/html; charset=UTF-8
200 OK again.
Your server is actively rejecting visitors from Facebook.
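If you want to reproduce the test without typing into telnet, the request is just plain text sent over the socket. A sketch that assembles the same HEAD request, with or without the Referer header (the added Connection: close header is just so a script doesn't hang waiting for keep-alive):

```python
def build_head_request(host, path, referer=None):
    """Assemble a raw HTTP/1.1 HEAD request like the telnet tests above."""
    lines = [
        "HEAD %s HTTP/1.1" % path,
        "Host: %s" % host,
    ]
    if referer:
        lines.append("Referer: %s" % referer)
    lines.append("Connection: close")
    return "\r\n".join(lines) + "\r\n\r\n"

req = build_head_request(
    "www.offshoreinjuries.com",
    "/blog/deepwater-horizon-11/",
    referer="http://www.facebook.com/l.php?u=http%3A%2F%2Fwww.offshoreinjuries.com%2Fblog%2Fdeepwater-horizon-11%2F&h=834ea",
)
```

Sending req with and without the referer argument (e.g. via socket.create_connection on port 80) and comparing the status lines reproduces the 200-vs-403 behaviour shown above.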
I was finally able to get to the bottom of this behavior.
The default mod_security settings of my host, HostGator, include a set of whitelists and blacklists. Upon inspecting these, I found .facebook.com/l.php blacklisted.
l.php is a wrapper page that provides a warning that you are leaving Facebook. As I understand it, since this can be easily exploited, HostGator chose to essentially blacklist all outbound Facebook links.
I fixed my problem by removing .facebook.com/l.php from the mod_security blacklist, however I could have also just reset my mod_security settings to Default (vs the HostGator config) via a single click in WHM.