Does WildFly 9.x support range requests? - jboss

Range requests are supposedly supported by the WildFly 9.x (Undertow) server, as mentioned here, but they are not working in my case, as you can see below.
Does anyone know if this really works on WildFly, and how do I make it work?
curl -I http://localhost:8080/stairs.mp4
 
HTTP/1.1 200 OK
Connection: keep-alive
Last-Modified: Mon, 09 Nov 2015 13:35:29 GMT
X-Powered-By: Undertow/1
Server: WildFly/9
Content-Type: video/mp4
Content-Length: 2890554
Date: Thu, 12 Nov 2015 17:29:53 GMT
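Note that the response above contains no Accept-Ranges header, and a plain curl -I sends no Range header, so the command does not actually exercise range support. A minimal sketch for testing it against the same local URL; a server that honors range requests should answer 206 Partial Content with a Content-Range header:

# Fetch only the first 1024 bytes and dump the response headers
curl -s -D - -o /dev/null -H "Range: bytes=0-1023" http://localhost:8080/stairs.mp4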

Related

gsutil is stuck without copying any data

I am trying to upload files from a Compute Engine instance (Debian) to Cloud Storage.
At some point gsutil completely stopped working. When running it with the -D flag I see these replies:
reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Content-Type: application/json; charset=UTF-8
header: Content-Encoding: gzip
header: Date: Tue, 27 May 2014 21:09:47 GMT
header: Expires: Tue, 27 May 2014 21:09:47 GMT
header: Cache-Control: private, max-age=0
header: X-Content-Type-Options: nosniff
header: X-Frame-Options: SAMEORIGIN
header: X-XSS-Protection: 1; mode=block
header: Server: GSE
header: Alternate-Protocol: 443:quic
header: Transfer-Encoding: chunked
process count: 1
thread count: 10
Try setting parallel_composite_upload_threshold = 0 under the [GSUtil] section in your boto file.
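For reference, the relevant lines in the boto configuration file (typically ~/.boto, though the exact path is an assumption and depends on your setup):

[GSUtil]
# 0 disables parallel composite uploads entirely, forcing plain
# single-stream uploads
parallel_composite_upload_threshold = 0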

Google Cloud Storage upload 10gb filesize error in Cyberduck

I'm trying to upload a big file (9 GB) to Google Storage using Cyberduck.
Login and transfers with small files work. However, for this file I'm getting the following error:
GET / HTTP/1.1
Date: Wed, 30 Apr 2014 08:47:34 GMT
x-goog-project-id: 674064471933
x-goog-api-version: 2
Authorization: OAuth SECRET_KEY
Host: storage.googleapis.com:443
Connection: Keep-Alive
User-Agent: Cyberduck/4.4.4 (Mac OS X/10.9) (x86_64)

HTTP/1.1 200 OK
Content-Type: application/xml; charset=UTF-8
Content-Length: 340
Date: Wed, 30 Apr 2014 08:47:35 GMT
Expires: Wed, 30 Apr 2014 08:47:35 GMT
Cache-Control: private, max-age=0
Server: HTTP Upload Server Built on Apr 16 2014 16:50:43 (1397692243)
Alternate-Protocol: 443:quic

GET /vibetracestorage/?prefix=eventsall.csv&uploads HTTP/1.1
Date: Wed, 30 Apr 2014 08:47:35 GMT
x-goog-api-version: 2
Authorization: OAuth SECRET_KEY
Host: storage.googleapis.com:443
Connection: Keep-Alive
User-Agent: Cyberduck/4.4.4 (Mac OS X/10.9) (x86_64)

HTTP/1.1 400 Bad Request
Content-Type: application/xml; charset=UTF-8
Content-Length: 173
Date: Wed, 30 Apr 2014 08:47:36 GMT
Expires: Wed, 30 Apr 2014 08:47:36 GMT
Cache-Control: private, max-age=0
Server: HTTP Upload Server Built on Apr 16 2014 16:50:43 (1397692243)
Alternate-Protocol: 443:quic
Am I missing anything? Thanks.
According to that log you posted, you're placing a GET to "https://storage.googleapis.com/vibetracestorage/?prefix=eventsall.csv&uploads".
I don't know what that "uploads" parameter tacked onto the end is, but it's not a valid parameter for requesting a bucket listing (which is what that request does).
If you place that request by hand, you'll see this error:
<?xml version='1.0' encoding='UTF-8'?><Error><Code>InvalidArgument</Code><Message>Invalid argument.</Message><Details>Invalid query parameter(s): [uploads]</Details></Error>
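A minimal way to place that request by hand is with curl, using the URL taken from the log above (even without authentication, the query-parameter validation should produce the same error):

curl "https://storage.googleapis.com/vibetracestorage/?prefix=eventsall.csv&uploads"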
Also, as a general point of good practice, do not post logs that contain your full Authorization header. That is a very, very bad idea. You may want to delete this question, although those credentials will expire (and perhaps already have).
This is an interoperability issue. In Cyberduck, when connected to S3, multipart uploads are supported as defined by Amazon S3. The request with the uploads parameter is used to find already-existing, in-progress multipart uploads for the same target object that can be resumed.
Make sure to choose Google Storage and not S3 in the protocol dropdown list in the bookmark or connection prompt. Multipart uploads should then be disabled.

Content-Type header lost when using custom URL

I'm hosting a Firefox plugin on Google Cloud Storage. In order to be properly handled by Firefox, the Content-Type needs to be set to application/x-xpinstall.
I have uploaded as follows:
gsutil -h "Content-Type: application/x-xpinstall" cp -a public-read \
ActivityInfo.xpi gs://download.activityinfo.org
When accessed from the standard endpoint, everything is correct:
$ curl -s -D - http://commondatastorage.googleapis.com/download.activityinfo.org/ActivityInfo.xpi \
-o /dev/null
HTTP/1.1 200 OK
Server: HTTP Upload Server Built on Feb 13 2013 15:53:33 (1360799613)
Expires: Thu, 28 Feb 2013 12:38:30 GMT
Date: Thu, 28 Feb 2013 11:38:30 GMT
Last-Modified: Thu, 28 Feb 2013 11:38:01 GMT
ETag: "1ee983889c947a204eab4db6902c9a67"
x-goog-generation: 1362051481261000
x-goog-metageneration: 1
Content-Type: application/x-xpinstall
Content-Language: en
x-goog-crc32c: a11b93ab
Accept-Ranges: bytes
Content-Length: 5562
Cache-Control: public, max-age=3600, no-transform
Age: 491
But when I access it via the custom domain download.activityinfo.org, the Content-Type header reverts to application/octet-stream:
$ curl -s -D - http://download.activityinfo.org/ActivityInfo.xpi -o /dev/null
HTTP/1.1 200 OK
Server: HTTP Upload Server Built on Feb 13 2013 15:53:33 (1360799613)
Expires: Thu, 28 Feb 2013 12:10:24 GMT
Date: Thu, 28 Feb 2013 11:10:24 GMT
Last-Modified: Wed, 27 Feb 2013 20:36:24 GMT
ETag: "1ee983889c947a204eab4db6902c9a67"
x-goog-generation: 1361997384772000
x-goog-metageneration: 2
Content-Type: application/octet-stream
x-goog-crc32c: a11b93ab
Accept-Ranges: bytes
Content-Length: 5562
Cache-Control: public, max-age=3600, no-transform
Age: 2298
I have set the CNAME to c.storage.googleapis.com per the docs:
$ nslookup download.activityinfo.org
Non-authoritative answer:
Server: Comtrend.Home
Address: 192.168.1.1
Name: storage.l.googleusercontent.com
Addresses: 2a00:1450:400c:c00::80
173.194.78.128
Aliases: download.activityinfo.org
c.storage.googleapis.com
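For reference, in zone-file notation the record described in the docs would look like this (names taken from the question):

download.activityinfo.org.  IN  CNAME  c.storage.googleapis.com.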
Is this a bug or do I need to change my configuration?
The two results above have different values in x-goog-generation and x-goog-metageneration, which makes me suspect you have uploaded the object more than once, and you were seeing the results from different versions (which have different values for Content-Type). Do you have versioning enabled for the bucket? If not, then maybe there is some caching going on in one of the paths. Are you still seeing this behavior?
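If the live generation was indeed uploaded without the header, it can be fixed in place with gsutil setmeta rather than re-uploading; a sketch, using the object path from the question:

# Rewrite the stored metadata so the object serves the correct Content-Type
gsutil setmeta -h "Content-Type:application/x-xpinstall" \
  gs://download.activityinfo.org/ActivityInfo.xpi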

Zend can't send Last-Modified header

I am trying to send a Last-Modified header. I can see it when I run the project on my local computer, but when I run a copy of it on the virtual host there is no Last-Modified header.
class InfoController extends Zend_Controller_Action
{
    public function indexAction()
    {
        $arr = strip_tags($this->_getParam('link'));
        $material = new Application_Model_InlineMenus();
        $mat = $material->preparematerial($arr);
        // Send Last-Modified based on the material's creation date
        $header = $this->getResponse()->setHeader("Last-Modified", gmdate("D, d M Y H:i:s", strtotime($mat['created'])) . " GMT", true);
        // other parts of code
    }
}
This is what I have in Firebug when I run the project on the local machine:
Cache-Control no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection Keep-Alive
Content-Length 4563
Content-Type text/html
Date Fri, 15 Feb 2013 10:31:49 GMT
Expires Thu, 19 Nov 1981 08:52:00 GMT
Keep-Alive timeout=5, max=99
Last-Modified Thu, 14 Feb 2013 12:41:31 GMT
Pragma no-cache
Server Apache/2.2.22 (Win32) PHP/5.3.13
X-Powered-By PHP/5.3.13
And this is what I get on my hosting:
Cache-Control no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection keep-alive
Content-Encoding gzip
Content-Type text/html; charset=UTF-8
Date Fri, 15 Feb 2013 10:34:06 GMT
Expires Thu, 19 Nov 1981 08:52:00 GMT
Pragma no-cache
Server nginx/1.1.10
Transfer-Encoding chunked
X-Powered-By PHP/5.3.21
There is a single difference between the local project and the deployed one: on the host I use an .htaccess file in the www directory to redirect requests to the www/public directory.
UPD: I created a plugin and tried to set the header in preDispatch(), but I got an HTTP 500 response.
Problem solved: disable SSI.
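For reference, a sketch of the nginx side of that fix, assuming SSI had been enabled in the site's server or location block (hosting setups differ):

# With SSI enabled, nginx strips the upstream Last-Modified header,
# since included fragments may change independently of the page.
ssi off;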

Sporadic 502 error with gwt rpc calls

I have a GWT application that has suddenly started giving sporadic 502 errors. I have managed to replicate it by opening multiple connections to the application. Eventually I get a 502 error, and the response headers look as follows:
Server: squid/2.6.STABLE5
Date: Fri, 19 Aug 2011 12:08:03 GMT
Content-Type: text/html
Content-Length: 1014
Expires: Fri, 19 Aug 2011 12:08:03 GMT
X-Squid-Error: ERR_ZERO_SIZE_OBJECT 0
X-Cache: MISS from sentinel.bsgza.bsg.co.za
X-Cache-Lookup: MISS from sentinel.bsgza.bsg.co.za:3128
Via: 1.0 sentinel.bsgza.bsg.co.za:3128 (squid/2.6.STABLE5)
Connection: close
The response headers for the successful RPC calls look like this:
Date: Fri, 19 Aug 2011 13:04:37 GMT
Server: Apache/2.2.14 (Ubuntu)
Content-Encoding: gzip
Content-Disposition: attachment
Content-Length: 249
Content-Type: application/json;charset=utf-8
X-Cache: MISS from sentinel.bsgza.bsg.co.za
X-Cache-Lookup: MISS from sentinel.bsgza.bsg.co.za:3128
Via: 1.0 sentinel.bsgza.bsg.co.za:3128 (squid/2.6.STABLE5)
Connection: keep-alive
We have been able to reproduce this on a local server too, so it is not a network issue.
Try not to route your RPC calls via a proxy (Squid), or at least configure Squid not to cache them but only to forward them.
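A minimal squid.conf sketch for the second option, assuming the GWT-RPC endpoints end in .rpc (adjust the pattern to your servlet mapping):

# Never serve GWT-RPC responses from cache; always forward to the origin.
acl gwtrpc urlpath_regex \.rpc$
cache deny gwtrpc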
Update
It's suggested here that such a condition might occur with HTTP POST (used by GWT-RPC) for clients behind PPPoA gateways (cable modems) that have a wrong MTU set. Do you see these errors from such clients?