Google Cloud Storage: java.lang.RuntimeException: Server replied with 400, Missing project id - google-cloud-storage

I have created an app that uses the Google Cloud Storage client library to let users upload a file to Cloud Storage.
I have created a bucket and added my project account to its ACL, like the following:
Also, my project has billing enabled. However, after I deploy my app and try to select a file to upload to Google Cloud Storage, it gives me the following exception:
/StdPersistentStorageClient
com.google.appengine.tools.cloudstorage.NonRetriableException: java.lang.RuntimeException: Server replied with 400, probably bad request: Request: PUT https://storage.googleapis.com/persistentstoragebucket/?upload_id=AEnB2Uq7SzQtkKjLb4mPdZiBlKJokLcXnn9R-wcdQzHphk5EsWwePwLU22u0aUP1Z9MFN28kIwoKvNxjfVIvMr5CO0YgjI9ihQ
User-Agent: App Engine GCS Client
Content-Range: bytes */0
no content
Response: 400 with 154 bytes of content
Content-Type: application/xml; charset=UTF-8
Content-Length: 154
Vary: Origin
Date: Fri, 09 Jan 2015 22:44:58 GMT
Server: UploadServer ("Built on Dec 19 2014 10:24:45 (1419013485)")
Alternate-Protocol: 443:quic,p=0.02
X-Google-Cache-Control: remote-fetch
Via: HTTP/1.1 GWA
<?xml version='1.0' encoding='UTF-8'?><Error><Code>InvalidArgument</Code><Message>Invalid argument.</Message><Details>Missing project id</Details></Error>
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:120)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:166)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:156)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl.close(GcsOutputChannelImpl.java:198)
at java.nio.channels.Channels$1.close(Channels.java:178)
at fci.cu.std.paas.api.xml.manifest.utilities.ManifestUtilities.copy(ManifestUtilities.java:83)
at fci.cu.std.paas.core.services.persistent.storage.PersistentStorageService.uploadBlob(PersistentStorageService.java:130)
at fci.cu.std.paas.client.services.persistent.storage.StdPersistentStorageClient.doPost(StdPersistentStorageClient.java:102)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
at com.google.apphosting.utils.servlet.ParseBlobUploadFilter.doFilter(ParseBlobUploadFilter.java:125)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
at com.google.apphosting.runtime.jetty.SaveSessionFilter.doFilter(SaveSessionFilter.java:35)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
at com.google.apphosting.utils.servlet.JdbcMySqlConnectionCleanupFilter.doFilter(JdbcMySqlConnectionCleanupFilter.java:60)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle(AppVersionHandlerMap.java:254)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:923)
at com.google.apphosting.runtime.jetty.RpcRequestParser.parseAvailable(RpcRequestParser.java:76)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:146)
at com.google.apphosting.runtime.JavaRuntime$RequestRunnable.run(JavaRuntime.java:484)
at com.google.tracing.TraceContext$TraceContextRunnable.runInContext(TraceContext.java:438)
at com.google.tracing.TraceContext$TraceContextRunnable$1.run(TraceContext.java:445)
at com.google.tracing.CurrentContext.runInContext(CurrentContext.java:220)
at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContextNoUnref(TraceContext.java:309)
at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContext(TraceContext.java:301)
at com.google.tracing.TraceContext$TraceContextRunnable.run(TraceContext.java:442)
at com.google.apphosting.runtime.ThreadGroupPool$PoolEntry.run(ThreadGroupPool.java:251)
at java.lang.Thread.run(Thread.java:724)
Could you please help me with this issue?

Well, you didn't post your code, so it's hard to be sure, but you're doing a PUT to "https://storage.googleapis.com/persistentstoragebucket/?upload_id=...". "persistentstoragebucket" is a bucket name, but there should be an object name after it. With just the bucket name, the request looks like a bucket creation request, which requires a project ID (hence your error).
Are you perhaps not specifying the name of the object you're trying to upload?
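For reference, here is a minimal sketch of an upload with the same App Engine GCS client, with both the bucket and an object name set; the object name below is only a placeholder:

import com.google.appengine.tools.cloudstorage.GcsFileOptions;
import com.google.appengine.tools.cloudstorage.GcsFilename;
import com.google.appengine.tools.cloudstorage.GcsOutputChannel;
import com.google.appengine.tools.cloudstorage.GcsService;
import com.google.appengine.tools.cloudstorage.GcsServiceFactory;

import java.io.IOException;
import java.nio.ByteBuffer;

public class GcsUploadSketch {

    public static void upload(byte[] data) throws IOException {
        GcsService gcsService = GcsServiceFactory.createGcsService();

        // Both the bucket AND an object name are required. With an empty
        // object name the client issues a PUT against the bucket URL alone,
        // which the server treats as a bucket-level request ("Missing project id").
        GcsFilename filename =
                new GcsFilename("persistentstoragebucket", "uploads/my-file.txt");

        GcsOutputChannel channel =
                gcsService.createOrReplace(filename, GcsFileOptions.getDefaultInstance());
        try {
            ByteBuffer buf = ByteBuffer.wrap(data);
            while (buf.hasRemaining()) {
                channel.write(buf);
            }
        } finally {
            channel.close(); // the object is finalized when the channel is closed
        }
    }
}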

Related

Kerberos doesn't work, no token in response header

We are trying to set up Kerberos; initially we had to run kinit for authentication to work. We have created our principals like everyone else on the team. Now, all of a sudden, three users are not able to get Kerberos working. Because we are all developers, our machines need to act as servers, so we have principals created for every machine.
The weird thing is that it worked for everyone at the beginning; now it works only for a few. We are able to see our keytab names in klist.
This is how we created the keytabs:
C:\Windows\system32>ktpass -princ HTTP/<complete system name>#<domain> -pass <password> -mapuser <keytab_filename>#<domain> -ptype krb5_nt_principal -kvno 0 -out c:\keytabs\<keytab_filename>

Targeting domain controller: <domain server>.<domain>
Successfully mapped HTTP/<complete system name> to <keytab_filename>.
Password succesfully set!
Key created.
Output keytab to c:\keytabs\<keytab_filename>:
Keytab version: 0x502
keysize 84 HTTP/<complete_system_name>#<domain> ptype 1 (KRB5_NT_PRINCIPAL) vno 0 etype 0x17 (RC4-HMAC) keylength 16 (some hash number)
The only difference I can see between the machine where Kerberos works and the machines where it doesn't is in the response headers: the server answers with WWW-Authenticate: Negotiate, but no token is ever returned in the response. We are not able to figure out what the issue is. These are the response headers:
Pragma: no-cache
Connection: keep-alive
Content-Length: 71
Cache-Control: no-cache, no-store, must-revalidate
Content-Type: text/html;charset=UTF-8
Date: Fri, 30 Jun 2017 20:18:06 GMT
Expires: 0
Server: JBoss-EAP/7
WWW-Authenticate: Negotiate
X-Powered-By: Undertow/1
I made sure that the browser is configured to use Kerberos.
Any help is greatly appreciated.
My application was missing the JBoss security negotiation dependency in the web module:
<jboss-deployment-structure>
  <deployment>
    <dependencies>
      <module name="org.jboss.security.negotiation"/>
    </dependencies>
  </deployment>
</jboss-deployment-structure>
Once this dependency was added, the Kerberos ticket started to appear in the requests and responses.

Kubernetes services communication by REST: Stream ended unexpectedly

There are two services deployed into a Kubernetes cluster. Service_1 exposes a REST API, and one part of it is a method for uploading file content, so a POST request with "Content-Type: multipart/form-data" is used.
A sample of the real request sent from Service_2 is:
Request DefaultFullHttpRequest(decodeResult: success, version: HTTP/1.1, content: UnpooledHeapByteBuf(freed))
POST /engine-rest/deployment/create HTTP/1.1
Accept: application/json
User-Agent: process
Content-Type: multipart/form-data; boundary=28319d96a8c54b529aa9159ad75edef9
Content-Length: 4028
Host: service.cluster.ip:8080
The request cannot be processed and fails with the following exception:
30-Mar-2017 18:17:29.623 WARNING [http-nio-8080-exec-2] org.camunda.bpm.engine.rest.exception.RestExceptionHandler.toResponse org.camunda.bpm.engine.rest.exception.RestException: multipart/form-data cannot be processed
at org.camunda.bpm.engine.rest.mapper.MultipartPayloadProvider.parseRequest(MultipartPayloadProvider.java:93)
at org.camunda.bpm.engine.rest.mapper.MultipartPayloadProvider.readFrom(MultipartPayloadProvider.java:71)
at org.camunda.bpm.engine.rest.mapper.MultipartPayloadProvider.readFrom(MultipartPayloadProvider.java:49)
at org.jboss.resteasy.core.interception.MessageBodyReaderContextImpl.proceed(MessageBodyReaderContextImpl.java:105)
at org.jboss.resteasy.plugins.interceptors.encoding.GZIPDecodingInterceptor.read(GZIPDecodingInterceptor.java:63)
at org.jboss.resteasy.core.interception.MessageBodyReaderContextImpl.proceed(MessageBodyReaderContextImpl.java:108)
at org.jboss.resteasy.core.MessageBodyParameterInjector.inject(MessageBodyParameterInjector.java:169)
at org.jboss.resteasy.core.MethodInjectorImpl.injectArguments(MethodInjectorImpl.java:136)
at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:159)
at org.jboss.resteasy.core.ResourceMethod.invokeOnTarget(ResourceMethod.java:257)
at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:222)
at org.jboss.resteasy.core.ResourceLocator.invokeOnTargetObject(ResourceLocator.java:159)
at org.jboss.resteasy.core.ResourceLocator.invoke(ResourceLocator.java:92)
at org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:542)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:524)
at org.jboss.resteasy.core.SynchronousDispatcher.invokePropagateNotFound(SynchronousDispatcher.java:169)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:212)
at org.jboss.resteasy.plugins.server.servlet.FilterDispatcher.doFilter(FilterDispatcher.java:59)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.camunda.bpm.engine.rest.filter.CacheControlFilter.doFilter(CacheControlFilter.java:41)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:142)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:617)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:518)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1091)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:668)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1527)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1484)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.commons.fileupload.MultipartStream$MalformedStreamException: Stream ended unexpectedly
at org.apache.commons.fileupload.MultipartStream.readHeaders(MultipartStream.java:538)
at org.apache.commons.fileupload.FileUploadBase$FileItemIteratorImpl.findNextItem(FileUploadBase.java:999)
at org.apache.commons.fileupload.FileUploadBase$FileItemIteratorImpl.<init>(FileUploadBase.java:965)
at org.apache.commons.fileupload.FileUploadBase.getItemIterator(FileUploadBase.java:331)
at org.camunda.bpm.engine.rest.mapper.MultipartPayloadProvider.parseRequest(MultipartPayloadProvider.java:87)
... 38 more
What might be the reason for this error? I understand that the question doesn't have a direct answer, but I hope someone can point me in the right direction for further investigation.
P.S. GET requests to this API work fine.
Sorted out! In case someone is interested or accidentally faces exactly the same problem: the root cause was the CRLF line endings that are required in multipart requests. We were using System.lineSeparator(), which produces "\n" on Linux, so the request body was malformed there. Yeah, so simple.
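A minimal sketch of that fix, assuming the multipart body is assembled by hand as it was in our case; the boundary, field name and file name are placeholders:

import java.nio.charset.StandardCharsets;

public class MultipartBodySketch {

    // RFC 2046 requires CRLF ("\r\n") as the line delimiter inside multipart
    // bodies, regardless of the platform the client runs on. Using
    // System.lineSeparator() yields "\n" on Linux and breaks the server-side parser.
    private static final String CRLF = "\r\n";

    static byte[] buildBody(String boundary, String fieldName,
                            String fileName, byte[] fileContent) {
        String head = "--" + boundary + CRLF
                + "Content-Disposition: form-data; name=\"" + fieldName
                + "\"; filename=\"" + fileName + "\"" + CRLF
                + "Content-Type: application/octet-stream" + CRLF
                + CRLF;
        String tail = CRLF + "--" + boundary + "--" + CRLF;

        byte[] headBytes = head.getBytes(StandardCharsets.UTF_8);
        byte[] tailBytes = tail.getBytes(StandardCharsets.UTF_8);
        byte[] body = new byte[headBytes.length + fileContent.length + tailBytes.length];
        System.arraycopy(headBytes, 0, body, 0, headBytes.length);
        System.arraycopy(fileContent, 0, body, headBytes.length, fileContent.length);
        System.arraycopy(tailBytes, 0, body,
                headBytes.length + fileContent.length, tailBytes.length);
        return body;
    }
}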

406: not acceptable response received using LWP::UserAgent/File::Download

Edit: it seems the issue was caused by a dropped cookie. There should have been a session id cookie as well.
For posterity, here's the original question.
When sending a request formed like this:
GET https://<url>?<parameters>
Cache-Control: max-age=0
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset: iso-8859-1,utf-8,UTF-8
Accept-Encoding: gzip, x-gzip, deflate, x-bzip2
Accept-Language: en-US,en;q=0.5
If-None-Match: "6eb7d55abfd0546399e3245ad3a76090"
User-Agent: Mozilla/5.0 libwww-perl/6.13
Cookie: auth_token=<blah>; __cfduid=<blah>
Cookie2: $Version="1"
I receive the following response
response-type: text/html
charset=utf-8
HTTP/1.1 406 Not Acceptable
Cache-Control: no-cache
Connection: keep-alive
Date: Fri, 12 Feb 2016 18:34:00 GMT
Server: cloudflare-nginx
Content-Type: text/html; charset=utf-8
CF-RAY: 273a62969a9b288e-SJC
Client-Date: Fri, 12 Feb 2016 18:34:00 GMT
Client-Peer: <IP4>:443
Client-Response-Num: 10
Client-SSL-Cert-Issuer: /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO ECC Domain Validation Secure Server CA 2
Client-SSL-Cert-Subject: /OU=Domain Control Validated/OU=PositiveSSL Multi-Domain/CN=ssl<blah>.cloudflaressl.com
Client-SSL-Cipher: <some value>
Client-SSL-Socket-Class: IO::Socket::SSL
Client-SSL-Warning: Peer certificate not verified
Client-Transfer-Encoding: chunked
Status: 406 Not Acceptable
X-Runtime: 9
I'm not entirely sure why the response is 406 Not Acceptable. When downloaded with Firefox, the file in question is 996 KB (as reported by Windows 8.1's Explorer). It looks like I have a partially transferred file from my Perl script at 991 KB (again, Windows Explorer size), so it got MOST of the file before throwing the Not Acceptable response. Using the same URL pattern and request style, I was able to successfully download a 36 MB file from the server with this Perl library and request form, so the size of the file should not be magically past some max (chunk) size. As these files are being updated on approximately 15-minute intervals, I suppose it's possible that a write was performed on the server, invalidating the ETag before all chunks of this file were complete?

I tried adding chunked to Accept-Encoding, but that header is not for transfer encodings, and it appears to have no effect on the server's behavior. Additionally, as I've been able to download larger files (same format) from the same server, that alone shouldn't be the cause of my woes. LWP is supposed to be able to handle chunked data returned in response to a GET (as per this newsgroup post).
The server in question is running nginx with Rack::Lint. The particular server configuration (which I in no way control) throws 500 errors on its own attempts to send 304 Not Modified. This caused me to write a workaround for File::Download (sub lintWorkAround here), so I'm not above putting blame on the server in this instance as well, if warranted. I don't believe I buggered up the chunk-handling code from File::Download 0.3 (see diff), but I suppose that's also possible. Is it possible to request a particular chunk size from the server?

I'm using LWP and libwww version 6.13 in Perl 5.18.2. The File::Download version is my own 0.4_050601.

So, what else could the 406 error mean? Is there a way to request that the server temporarily cache/version-control the entire file so that I can download a given ETag'd file once the transfer begins?

How to search using Github API with enterprise

I'm trying to search through repositories, but I can't seem to figure it out with GitHub Enterprise edition. I have tried the following, with no results. Any suggestions?
curl -i http://my.domain.com/api/v3/repositories "If-Modified-Since: Mon, 16 Jun 2014 01:01:01 CST"
curl -i http://my.domain.com/api/v3/search/repos?q=pushed:2014-06-17
HTTP/1.1 404 Not Found
Server: GitHub.com
Date: Wed, 18 Jun 2014 16:45:58 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Status: 404 Not Found
X-GitHub-Media-Type: github.beta
X-Content-Type-Options: nosniff
Content-Length: 29
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: ETag, Link, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes
Access-Control-Allow-Origin: *
X-GitHub-Request-Id: b4eec0e7-1b1a-48b7-81d8-d63c28b55b37
{
  "message": "Not Found"
}
One of the nice things about GitHub's API, both public and Enterprise, is that if you go to the API root, it will tell you which endpoints are available. On an Enterprise instance that is: http://my.domain.com/api/v3/. Looking at my company's Enterprise instance (sorry, not sure of the version), I only see the legacy search API endpoints.
As a result, http://my.domain.com/api/v3/legacy/repos/search/pushed:2014-06-17 is likely the search URL you are looking for.
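If it helps, here is a small illustrative Java client against that legacy endpoint; the host, the date filter and the commented-out token header are placeholders to adapt to your instance:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class LegacyRepoSearch {
    public static void main(String[] args) throws Exception {
        // Placeholder host and query; adjust to your GitHub Enterprise instance.
        URL url = new URL("http://my.domain.com/api/v3/legacy/repos/search/pushed:2014-06-17");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        // If your instance requires authentication, send a token, e.g.:
        // conn.setRequestProperty("Authorization", "token <personal-access-token>");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // prints the JSON search result
            }
        }
    }
}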

File not loaded from browser cache with Expires and Cache-Control

I'm working on a GWT project and I set up a proxy with apache2, mod_proxy, mod_expires and mod_headers to manage load balancing and caching.
All resources are fine except one XXX.cache.html file. With Firefox/Firebug or Chrome/Developer Tools, I can see that it's the only file which is not served "from cache", and it's the biggest file.
The HTML file (generated by the GWT compiler) does not contain a meta tag with cache parameters.
I don't see what is wrong:
Request:
Request URL:https://myproject.visionobjects.com//com.visionobjects.myscript.myProject/75797371ADDF8643260E34AC670CE051.cache.html
Request Method:GET
Status Code:200 OK
Request Headers:
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
Cache-Control:max-age=0
Connection:keep-alive
Cookie:__utma=215925392.462910615.1307714119.1324051842.1332755699.3; MYSCRIPTWSSESSONID=myproject-node1ali0jv5kfn371vbphmbhekcx9.myproject-node1; __utma=255591828.1472483096.1335971537.1348212480.1349343132.10; __utmb=255591828.15.10.1349343132; __utmc=255591828; __utmz=255591828.1335971537.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)
Host:myproject.visionobjects.com
Pragma:no-cache
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.13 (KHTML, like Gecko) Chrome/24.0.1284.2 Safari/537.13
Response:
Accept-Ranges:bytes
Cache-Control:max-age=31536000, public
Connection:Keep-Alive
Content-Encoding:gzip
Content-Type:text/html
Date:Thu, 04 Oct 2012 10:12:43 GMT
Expires:Fri, 04 Oct 2013 10:12:43 GMT
Keep-Alive:timeout=15, max=98
Server:Jetty(7.6.5.v20120716)
Transfer-Encoding:chunked
Vary:Accept-Encoding
The request has Cache-Control: max-age=0 and Pragma: no-cache, so it's not about your server configuration: the browser itself is asking the server to revalidate the file (typically because the page was reloaded), so it will not be served from the local cache regardless of the Expires and Cache-Control headers the server sends back.