I can't upload files bigger than 20 MB to my S3 bucket - s3fs

I recently created an S3 bucket at Scaleway.
I mounted it using s3fs without any apparent problem.
However, I have problems uploading some mid-size files.
If the size is under 20 MB it's OK, but with larger files (50 MB and more) the copy fails with the message "unable to write file, permission denied".
I contacted Scaleway support, but they said it's related to my s3fs client.
I mounted the bucket in debug mode, using:
$ sudo s3fs tellurix /mnt/scaleway/ -o passwd_file=${HOME}/.passwd-s3fs,url=https://s3.fr-par.scw.cloud,allow_other -o use_path_request_style,noatime -o dbglevel=info -f -o curldbg
I copy/paste the last 100 lines of the log below, because I don't see where the error is.
Thanks a lot for your help.
* SSL_write() returned SYSCALL, errno = 32
* Closing connection 6
[ERR] curl.cpp:RequestPerform(2546): ### CURLE_SEND_ERROR
* SSL_write() returned SYSCALL, errno = 32
* Closing connection 5
[ERR] curl.cpp:RequestPerform(2546): ### CURLE_SEND_ERROR
[INF] curl.cpp:RequestPerform(2621): ### retrying...
[INF] curl.cpp:RemakeHandle(2248): Retry request. [type=9][url=https://s3.fr-par.scw.cloud/tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=5&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1][path=/ant/MyHome automation guide 72488.pdf]
* Hostname s3.fr-par.scw.cloud was found in DNS cache
* Trying 2001:bc8:1002::30:443...
* TCP_NODELAY set
* Connected to s3.fr-par.scw.cloud (2001:bc8:1002::30) port 443 (#6)
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL re-using session ID
* SSL_write() returned SYSCALL, errno = 32
* Closing connection 5
[ERR] curl.cpp:RequestPerform(2546): ### CURLE_SEND_ERROR
* old SSL session ID is stale, removing
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
* Server certificate:
* subject: CN=s3.fr-par.scw.cloud
* start date: Feb 10 23:20:22 2020 GMT
* expire date: May 10 23:20:22 2020 GMT
* subjectAltName: host "s3.fr-par.scw.cloud" matched cert's "s3.fr-par.scw.cloud"
* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
* SSL certificate verify ok.
> PUT /tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=5&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1 HTTP/1.1
Host: s3.fr-par.scw.cloud
User-Agent: s3fs/1.86 (commit hash 005a684; OpenSSL)
Accept: */*
Content-Length: 10485760
Expect: 100-continue
* SSL_write() returned SYSCALL, errno = 32
* Closing connection 6
[ERR] curl.cpp:RequestPerform(2546): ### CURLE_SEND_ERROR
* Mark bundle as not supporting multiuse
< HTTP/1.1 403 Forbidden
< x-amz-id-2: tx97bf2f1b3ccd47c4a5f91-005eaa999a
< x-amz-request-id: tx97bf2f1b3ccd47c4a5f91-005eaa999a
< Content-Type: application/xml
< Date: Thu, 30 Apr 2020 09:25:46 GMT
< Transfer-Encoding: chunked
* HTTP error before end of send, keep sending
<
[INF] curl.cpp:RequestPerform(2621): ### retrying...
[INF] curl.cpp:RemakeHandle(2248): Retry request. [type=9][url=https://s3.fr-par.scw.cloud/tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=2&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1][path=/ant/MyHome automation guide 72488.pdf]
[ERR] curl.cpp:RequestPerform(2639): ### giving up
[WAN] curl.cpp:MultiPerform(4340): thread failed - rc(-5)
[INF] curl.cpp:insertV4Headers(2797): computing signature [PUT] [/ant/MyHome automation guide 72488.pdf] [partNumber=6&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1] [34ec149b334729973e407bada5e11b96774acfd1375b8009f789474ecb9bb2bb]
[INF] curl.cpp:url_to_host(99): url is https://s3.fr-par.scw.cloud
* Hostname s3.fr-par.scw.cloud was found in DNS cache
* Trying 2001:bc8:1002::30:443...
* TCP_NODELAY set
* Connected to s3.fr-par.scw.cloud (2001:bc8:1002::30) port 443 (#7)
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL re-using session ID
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
* Server certificate:
* subject: CN=s3.fr-par.scw.cloud
* start date: Feb 10 23:20:22 2020 GMT
* expire date: May 10 23:20:22 2020 GMT
* subjectAltName: host "s3.fr-par.scw.cloud" matched cert's "s3.fr-par.scw.cloud"
* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
* SSL certificate verify ok.
> PUT /tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=6&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1 HTTP/1.1
Host: s3.fr-par.scw.cloud
User-Agent: s3fs/1.86 (commit hash 005a684; OpenSSL)
Authorization: AWS4-HMAC-SHA256 Credential=xxxxxx/20200430/fr-par/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=91bbf50cc33a1f1d1cd3f3660fcc116e857223b4f8297b6c796e7dc32f244bac
x-amz-content-sha256: 34ec149b334729973e407bada5e11b96774acfd1375b8009f789474ecb9bb2bb
x-amz-date: 20200430T092546Z
Content-Length: 1132789
Expect: 100-continue
[INF] curl.cpp:RequestPerform(2621): ### retrying...
[INF] curl.cpp:RemakeHandle(2248): Retry request. [type=9][url=https://s3.fr-par.scw.cloud/tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=1&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1][path=/ant/MyHome automation guide 72488.pdf]
[ERR] curl.cpp:RequestPerform(2639): ### giving up
* Mark bundle as not supporting multiuse
< HTTP/1.1 100 Continue
* SSL_write() returned SYSCALL, errno = 32
* Closing connection 6
[ERR] curl.cpp:RequestPerform(2546): ### CURLE_SEND_ERROR
[INF] curl.cpp:RequestPerform(2621): ### retrying...
[INF] curl.cpp:RemakeHandle(2248): Retry request. [type=9][url=https://s3.fr-par.scw.cloud/tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=3&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1][path=/ant/MyHome automation guide 72488.pdf]
[ERR] curl.cpp:RequestPerform(2639): ### giving up
[INF] curl.cpp:RequestPerform(2621): ### retrying...
[INF] curl.cpp:RemakeHandle(2248): Retry request. [type=9][url=https://s3.fr-par.scw.cloud/tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=4&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1][path=/ant/MyHome automation guide 72488.pdf]
[ERR] curl.cpp:RequestPerform(2639): ### giving up
[INF] curl.cpp:RequestPerform(2621): ### retrying...
[INF] curl.cpp:RemakeHandle(2248): Retry request. [type=9][url=https://s3.fr-par.scw.cloud/tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=5&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1][path=/ant/MyHome automation guide 72488.pdf]
[ERR] curl.cpp:RequestPerform(2639): ### giving up
* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Length: 0
< x-amz-id-2: tx64fa48b5fffb4985bee17-005eaa999a
< Last-Modified: Thu, 30 Apr 2020 09:25:46 GMT
< ETag: "30c5132a619a14608ff0a3d9bac63fe2"
< x-amz-request-id: tx64fa48b5fffb4985bee17-005eaa999a
< x-amz-version-id: 1588238746862950
< Content-Type: text/html; charset=UTF-8
< Date: Thu, 30 Apr 2020 09:25:59 GMT
<
* Connection #7 to host s3.fr-par.scw.cloud left intact
[INF] curl.cpp:RequestPerform(2455): HTTP response code 200
[WAN] curl.cpp:MultiPerform(4374): thread failed - rc(-5)
[WAN] curl.cpp:MultiPerform(4374): thread failed - rc(-5)
[WAN] curl.cpp:MultiPerform(4374): thread failed - rc(-5)
[WAN] curl.cpp:MultiPerform(4374): thread failed - rc(-5)
[WAN] curl.cpp:MultiRead(4400): error from callback function(https://s3.fr-par.scw.cloud/tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=1&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1).
[WAN] curl.cpp:MultiRead(4400): error from callback function(https://s3.fr-par.scw.cloud/tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=2&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1).
[WAN] curl.cpp:MultiRead(4400): error from callback function(https://s3.fr-par.scw.cloud/tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=3&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1).
[WAN] curl.cpp:MultiRead(4400): error from callback function(https://s3.fr-par.scw.cloud/tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=4&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1).
[WAN] curl.cpp:MultiRead(4400): error from callback function(https://s3.fr-par.scw.cloud/tellurix/ant/MyHome%20automation%20guide%2072488.pdf?partNumber=5&uploadId=YmNkMmE3MWMtMDFhYi00NDhmLTlkYWItMjEyMDA1YTM1Njk1).
[INF] curl.cpp:CompleteMultipartPostRequest(3642): [tpath=/ant/MyHome automation guide 72488.pdf][parts=6]
[ERR] curl.cpp:CompleteMultipartPostRequest(3653): 1 file part is not finished uploading.
[INF] s3fs.cpp:s3fs_release(2358): [path=/ant/MyHome automation guide 72488.pdf][fd=11]
[INF] cache.cpp:DelStat(582): delete stat cache entry[path=/ant/MyHome automation guide 72488.pdf]
[INF] fdcache.cpp:GetFdEntity(2650): [path=/ant/MyHome automation guide 72488.pdf][fd=11]

I successfully mounted and wrote a 500 MB file to Scaleway using your command-line arguments. Given the CURLE_SEND_ERROR, I wonder if you have some kind of network problem. Maybe try a lower value for -o parallel_count, e.g., 1? See https://github.com/s3fs-fuse/s3fs-fuse/issues/1283#issuecomment-623026911 for the resolution.
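For example, your mount command with parallelism reduced to a single connection would look like this (parallel_count is a documented s3fs option; everything else mirrors your invocation):
$ sudo s3fs tellurix /mnt/scaleway/ -o passwd_file=${HOME}/.passwd-s3fs,url=https://s3.fr-par.scw.cloud,allow_other -o use_path_request_style,noatime -o parallel_count=1 -o dbglevel=info -f -o curldbg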

From where do you mount your bucket? Is it a PC in your home or a cloud VM? How much time does it take before you receive this error?
I'm asking because "SSL_write() returned SYSCALL, errno = 32" looks like something is closing your connection. "HTTP error before end of send, keep sending" also points to that kind of problem. Maybe a timeout occurs? Do you have a NAT gateway between you and your bucket? That can also cause the problem if it does not care about keepalives, as the upload can take relatively long.
As the s3fs wiki says, 20 MB is the threshold at which s3fs switches from a single request to a multipart upload. Maybe Scaleway has a slightly different API for multipart uploads than Amazon? From the s3fs wiki: "Some providers do not support the full S3 API, e.g., lacking multi-part upload." Please note that s3fs is mainly intended to work with Amazon S3, and Scaleway is not on the list of supported providers in the s3fs wiki: https://github.com/s3fs-fuse/s3fs-fuse/wiki/Non-Amazon-S3.
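One way to test whether the multipart path itself is the problem is the nomultipart option, which disables multipart uploads entirely (single-request uploads are then limited to 5 GB); if a 50 MB copy succeeds with it, multipart is the culprit:
$ sudo s3fs tellurix /mnt/scaleway/ -o passwd_file=${HOME}/.passwd-s3fs,url=https://s3.fr-par.scw.cloud,allow_other -o use_path_request_style,noatime -o nomultipart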
One last thing: what's your version of libcurl? The s3fs documentation says it should be 7.16 or 7.17. And are you using the latest version of s3fs?
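Both are quick to check from the shell; the first line of curl --version also reports the libcurl release:
$ s3fs --version
$ curl --version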

Related

Mounting a bucket using S3FS doesn't work as a non-root user

I'm trying to mount an Exoscale bucket on an Exoscale VM running Ubuntu 20.04 using s3fs, as the ubuntu user created by default by Exoscale. After reading the s3fs README and a few online tutorials, here is what I have done.
# install s3fs
sudo apt-get install -y s3fs
# create a password file with the right permissions
echo API-KEY:API-SECRET > /home/ubuntu/.passwd-s3fs
chmod 600 /home/ubuntu/.passwd-s3fs
# mount the bucket
sudo mkdir /home/ubuntu/bucket
sudo s3fs test-bucket /home/ubuntu/bucket -o passwd_file=${HOME}/.passwd-s3fs -o url=https://sos-bg-sof-1.exo.io
The command doesn't output anything.
If I try to check the permissions on the directory, I get:
$ ls -l
ls: cannot access 'bucket': Permission denied
total 0
d????????? ? ? ? ? ? bucket
If I try to run it with debug output enabled, I get
sudo s3fs test-bucket /home/ubuntu/bucket -o passwd_file=${HOME}/.passwd-s3fs -o url=https://sos-bg-sof-1.exo.io -o dbglevel=info -f -o curldbg
[CRT] s3fs.cpp:set_s3fs_log_level(297): change debug level from [CRT] to [INF]
[INF] s3fs.cpp:set_mountpoint_attribute(4400): PROC(uid=0, gid=0) - MountPoint(uid=1000, gid=1000, mode=40775)
[INF] s3fs.cpp:s3fs_init(3493): init v1.86(commit:unknown) with GnuTLS(gcrypt)
[INF] s3fs.cpp:s3fs_check_service(3828): check services.
[INF] curl.cpp:CheckBucket(3413): check a bucket.
[INF] curl.cpp:prepare_url(4703): URL is https://sos-bg-sof-1.exo.io/test-bucket/
[INF] curl.cpp:prepare_url(4736): URL changed is https://test-bucket.sos-bg-sof-1.exo.io/
[INF] curl.cpp:insertV4Headers(2753): computing signature [GET] [/] [] []
[INF] curl.cpp:url_to_host(99): url is https://sos-bg-sof-1.exo.io
* Trying 194.182.177.119:443...
* TCP_NODELAY set
* Connected to test-bucket.sos-bg-sof-1.exo.io (194.182.177.119) port 443 (#0)
* found 414 certificates in /etc/ssl/certs
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: *.sos-bg-sof-1.exo.io (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: OU=Domain Control Validated,OU=Gandi Standard Wildcard SSL,CN=*.sos-bg-sof-1.exo.io
* start date: Mon, 22 Apr 2019 00:00:00 GMT
* expire date: Thu, 22 Apr 2021 23:59:59 GMT
* issuer: C=FR,ST=Paris,L=Paris,O=Gandi,CN=Gandi Standard SSL CA 2
> GET / HTTP/1.1
Host: test-bucket.sos-bg-sof-1.exo.io
User-Agent: s3fs/1.86 (commit hash unknown; GnuTLS(gcrypt))
Accept: */*
Authorization: AWS4-HMAC-SHA256 Credential=EXO6ff92566c0d6283678d65a81/20201209/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=22c63079ecfa9bf2f36f1da8e39835172bb1ce3cc59d62484cc0377c854571d4
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date: 20201209T141053Z
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< server: nginx
< date: Wed, 09 Dec 2020 14:10:53 GMT
< content-type: application/xml
< content-length: 262
< vary: Accept-Encoding
< x-amz-bucket-region: bg-sof-1
< x-amz-request-id: 8536e881-5897-4843-b20e-891f734ccb2a
< x-amzn-requestid: 8536e881-5897-4843-b20e-891f734ccb2a
< x-amz-id-2: 8536e881-5897-4843-b20e-891f734ccb2a
<
* Connection #0 to host ubuntu-bucket-production-120920.sos-bg-sof-1.exo.io left intact
[INF] curl.cpp:RequestPerform(2416): HTTP response code 200
[INF] curl.cpp:ReturnHandler(318): Pool full: destroy the oldest handler
This doesn't show any evident error.
Am I missing anything here?
Doing the same as root works as expected. However, I'm planning to disable root access, so I need to make it work as the ubuntu user.
To make it work, I had to add the following options:
sudo s3fs test-bucket /home/ubuntu/bucket -o passwd_file=${HOME}/.passwd-s3fs -o url=https://sos-bg-sof-1.exo.io -o uid=1000,gid=1000,allow_other,mp_umask=002
Hope this may help others.
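If you need the mount to persist across reboots, a rough /etc/fstab equivalent (a sketch following the fuse.s3fs syntax from the s3fs README; adjust the bucket name, paths, and URL to your setup) would be:
test-bucket /home/ubuntu/bucket fuse.s3fs _netdev,allow_other,uid=1000,gid=1000,mp_umask=002,passwd_file=/home/ubuntu/.passwd-s3fs,url=https://sos-bg-sof-1.exo.io 0 0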
@Sig's answer worked for me like a charm, thanks!

HAProxy 1.8: delaying http/2 (h2) requests using tcp-request inspect-delay

Using HAProxy 1.8, I want to slow down certain traffic. This all works when testing over HTTP/1.1. However, as soon as HTTP/2 (h2) is enabled in HAProxy, the 10s delay no longer takes effect. How can I delay h2 requests?
frontend web
bind [...] alpn h2,http/1.1
tcp-request inspect-delay 10s
tcp-request content accept if WAIT_END
[...]
I'm testing using curl:
time curl -I 'https://[url]/' -v
* Trying 10.233.1.97...
* TCP_NODELAY set
* Connected to [url] (10.233.1.97) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
[...]
* ALPN, server accepted to use h2
[...]
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fd3f5808200)
> GET / HTTP/2
> Host: [...]
> User-Agent: curl/7.64.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 411
HTTP/2 411
< content-type: text/html; charset=us-ascii
content-type: text/html; charset=us-ascii
< server: Microsoft-HTTPAPI/2.0
server: Microsoft-HTTPAPI/2.0
< date: Thu, 02 Apr 2020 19:18:22 GMT
date: Thu, 02 Apr 2020 19:18:22 GMT
< content-length: 344
content-length: 344
<
* Excess found in a non pipelined read: excess = 344 url = / (zero-length body)
* Connection #0 to host app.cloudbilling.nl left intact
* Closing connection 0
curl -I 'https://[url]/' -v 0.02s user 0.01s system 28% cpu 0.101 total

Cannot retrieve file list from Azure File Storage using REST API and curl

I'm trying to retrieve the list of files stored in an Azure File Storage account using the REST API and curl. I computed the headers according to the documentation, using the shared key, but the curl request returns neither the file list nor any error message.
Here is my request and the response:
curl -v -H "Authorization: SharedKey myaccount:bAJKeY0xyOZLSJOLDoHfXXOqfA4kOGo1DVFP3BejhY8=" -H "x-ms-date:Mon, 13 Aug 2018 15:22:31 GMT" -H "x-ms-version:2017-07-29" --url https://myaccount.file.core.windows.net/myshare/mydir?restype=directory&comp=list
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 52.239.140.8...
* Connected to myaccount.file.core.windows.net (52.239.140.8) port 443 (#0)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 597 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: *.file.core.windows.net (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: CN=*.file.core.windows.net
* start date: Thu, 09 Nov 2017 05:42:03 GMT
* expire date: Sat, 09 Nov 2019 05:42:03 GMT
* issuer: C=US,ST=Washington,L=Redmond,O=Microsoft Corporation,OU=Microsoft IT,CN=Microsoft IT TLS CA 5
* compression: NULL
* ALPN, server did not agree to a protocol
GET /myshare/mydir?restype=directory HTTP/1.1
Host: myaccount.file.core.windows.net
User-Agent: curl/7.47.0
Accept: */*
Authorization: SharedKey
myaccount:bAJKeY0xyOZLSJOLDoHfXXOqfA4kOGo1DVFP3BejhY8=
x-ms-date:Mon, 13 Aug 2018 15:22:31 GMT
x-ms-version:2017-07-29
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Last-Modified: Fri, 27 Apr 2018 16:11:14 GMT
ETag: "0x8D5AC597FF96B3D"
Server: Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: 75d6d7c8-f01a-0011-5b19-33104d000000
x-ms-version: 2017-07-29
x-ms-server-encrypted: true
Date: Mon, 13 Aug 2018 15:22:29 GMT
{ [5 bytes data]
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
* Connection #0 to host myaccount.file.core.windows.net left intact
No XML with the file list is returned.
I tried to retrieve the share list under myaccount, and it works, as does downloading a single file, but I cannot get the list of files under a directory.
Two points:
Look at the URL in your curl command:
--url https://myaccount.file.core.windows.net/myshare/mydir?restype=directory&comp=list
You forgot to put the URL in quotes, so the parameter &comp=list is cut off, because & is a reserved character in the shell. This is also shown by the output GET /myshare/mydir?restype=directory HTTP/1.1, which lacks comp=list.
Generally speaking, if the URL is missing the comp parameter, we should get the error message AuthenticationFailed, because comp is used in generating the SharedKey. However, you get HTTP/1.1 200 OK with your SharedKey.
Based on the response headers you get, I guess you also missed comp when constructing the SharedKey, so the SharedKey and URL happen to be consistent and fetch the directory properties correctly.
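The fix is to quote the URL so the shell passes both query parameters through, and to recompute the signature with comp=list included in the string-to-sign (the signature below is a placeholder and must be regenerated):
curl -v -H "Authorization: SharedKey myaccount:<signature-recomputed-with-comp=list>" -H "x-ms-date:Mon, 13 Aug 2018 15:22:31 GMT" -H "x-ms-version:2017-07-29" --url "https://myaccount.file.core.windows.net/myshare/mydir?restype=directory&comp=list"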

Deployment fails with a 504

I'm trying to deploy an application to the Swisscom App Cloud from the console. It reports progress until, at the end, a 504 is reported without further explanation:
Updating app helloclass-fe-develop in org UCID-Bern Team / space HELLOCLASS-TEST as christian.cueni#iterativ.ch...
OK
Uploading helloclass-fe-develop...
FAILED
Error processing app files: Error uploading application.
Server error, status code: 504, error code: 0, message:
The app's log reports that it has been updated:
2017-01-03 09:37:39 [RTR/0] OUT helloclass-develop.scapp.io - [03/01/2017:08:37:39.584 +0000] "GET / HTTP/1.1" 200 0 594 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36 Google Favicon" 66.249.93.201:50868 10.0.18.35:64341 x_forwarded_for:"83.76.152.96" x_forwarded_proto:"https" vcap_request_id:8a8adcc7-9e97-4bd9-4492-68e92883ee3d response_time:0.001739219 app_id:310166b4-f3a6-4168-a9ac-530e45dbfb10 app_index:0
2017-01-03 09:37:39 [APP/PROC/WEB/0] OUT 83.76.152.96, 66.249.93.201, 66.249.93.201 - - - [03/Jan/2017:08:37:39 +0000] "GET / HTTP/1.1" 200 606
2017-01-03 10:05:50 [API/2] OUT Updated app with guid 310166b4-f3a6-4168-a9ac-530e45dbfb10 ({"name"=>"helloclass-fe-develop"})
2017-01-03 10:57:15 [API/1] OUT Updated app with guid 310166b4-f3a6-4168-a9ac-530e45dbfb10 ({"state"=>"STOPPED"})
2017-01-03 10:57:15 [CELL/0] OUT Exit status 0
2017-01-03 10:57:15 [APP/PROC/WEB/0] OUT Exit status 0
2017-01-03 10:57:15 [CELL/0] OUT Destroying container
2017-01-03 10:57:15 [CELL/0] OUT Successfully destroyed container
2017-01-03 10:57:16 [API/1] OUT Updated app with guid 310166b4-f3a6-4168-a9ac-530e45dbfb10 ({"state"=>"STARTED"})
2017-01-03 10:57:16 [CELL/0] OUT Creating container
2017-01-03 10:57:16 [CELL/0] OUT Successfully created container
2017-01-03 10:57:17 [CELL/0] OUT Starting health monitoring of container
2017-01-03 10:57:19 [CELL/0] OUT Container became healthy
In spite of those messages, which indicate that the app has been updated, I still see the old version of the app being served.
EDIT
After running the command with the -v parameter, I see that the reason for the failure is a gateway timeout:
RESPONSE: [2017-01-03T13:32:39+01:00]
HTTP/1.1 504 Gateway Timeout
Connection: close
Content-Length: 176
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: text/html
Date: Tue, 03 Jan 2017 12:32:39 GMT
Expires: 0
Pragma: no-cache
Strict-Transport-Security: max-age=15768000; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Vcap-Request-Id: 3ac831ef-e70b-4f4e-7c56-e308806f039e
X-Xss-Protection: 1; mode=block
<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx</center>
</body>
</html>
FAILED
Error processing app files: Error uploading application.
Server error, status code: 504, error code: 0, message:
Is this something Cloud Foundry specific, or rather related to the Swisscom App Cloud? Are there inherent Cloud Foundry timeout limits?
You can run cf push with -v or enable CF_TRACE to see more of the interaction of the CLI with your CF endpoint.
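For example (CF_TRACE is a standard cf CLI environment variable; the app name is taken from your output):
CF_TRACE=true cf push helloclass-fe-develop
# or write the trace to a file for later inspection
CF_TRACE=/tmp/cf-trace.log cf push helloclass-fe-develop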
The error message looks similar to https://github.com/cloudfoundry/cli/issues/1042: the Cloud Controller could not complete a request in time and the router that routed the API request to Cloud Controller did not wait any longer and returned the 504 (Gateway timeout) to the CLI.
The trace should tell you which API call timed out.
The CLI aborted the operation there, while the Cloud Controller may have completed the operation successfully, eventually.
I would have thought the operations the CLI would perform here are:
send a list of files in your app and their checksums for resource matching (so it can skip uploading unmodified app bits that the CC cached from a previous push)
upload app files
(re)start app (which includes staging)
poll & wait until an app instance returns that it's running
From your CLI output I assume the first operation timed out, so it's not clear how your app was restarted.

Facebook API - timed out before SSL handshake

I'm facing this issue with the Facebook PHP SDK. My application is hosted on AWS EC2 (Virginia).
It happens randomly but has recently increased in frequency. I've read that it is necessary to specify some cURL options, so I've done so:
self::$CURL_OPTS[CURLOPT_IPRESOLVE] = CURL_IPRESOLVE_V4;
self::$CURL_OPTS[CURLOPT_SSLVERSION] = 3;
self::$CURL_OPTS[CURLOPT_CONNECTTIMEOUT] = 20;
Because IPv6 is not supported on EC2 instances, we need to force IPv4.
I've read advice to force SSL version 3.
I've tried to increase the connect timeout from 10 to 20 seconds.
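For reference, roughly the same settings can be reproduced from the shell to test the handshake outside PHP (-4 forces IPv4, -3 forces SSLv3 on curl builds of that era, and --connect-timeout matches the 20-second value above):
curl -v -4 -3 --connect-timeout 20 https://graph.facebook.com/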
I'm still getting the following error:
FacebookAPIException: timed out before SSL handshake
I believe this is not actually a Facebook exception, but a cURL error surfaced by the SDK.
I can't really enable verbose mode because I have many requests and only a small percentage is failing at the moment.
Is anyone having the same issue?
My system:
Centos Linux 2.6.32-431.5.1.el6.x86_64 #1 SMP Wed Feb 12 00:41:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
libcurl.x86_64 7.19.7-37.el6_4
php curl
cURL support => enabled
cURL Information => 7.19.7
...
Protocols => tftp, ftp, telnet, dict, ldap, ldaps, http, file, https, ftps, scp, sftp
Host => x86_64-redhat-linux-gnu
SSL Version => NSS/3.14.0.0
ZLib Version => 1.2.3
libSSH Version => libssh2/1.4.2
UPDATE
I've opened a Facebook bug here: https://developers.facebook.com/x/bugs/1461144600769806/
UPDATE 2
My Facebook bug has been closed without any useful answer.
I've managed to log the verbose cURL output for this error:
Verbose log:
* About to connect() to graph.facebook.com port 443 (#0)
* Trying 173.252.100.27... * connected
* Connected to graph.facebook.com (173.252.100.27) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* timed out before SSL handshake
* NSS error -5978
* Closing connection #0
On success, cURL does the following:
Verbose log:
* About to connect() to graph.facebook.com port 443 (#0)
* Trying 173.252.112.23... * connected
* Connected to graph.facebook.com (173.252.112.23) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using SSL_RSA_WITH_RC4_128_SHA
* Server certificate:
* subject: CN=*.facebook.com,O="Facebook, Inc.",L=Palo Alto,ST=California,C=US
* start date: Oct 28 00:00:00 2013 GMT
* expire date: Aug 05 23:59:59 2015 GMT
* common name: *.facebook.com
* issuer: CN=VeriSign Class 3 Secure Server CA - G3,OU=Terms of use at https://www.verisign.com/rpa (c)10,OU=VeriSign Trust Network,O="VeriSign, Inc.",C=US
> POST /xxxxxxxxxxx/feed HTTP/1.1
User-Agent: facebook-php-3.2
Host: graph.facebook.com
Accept: */*
Content-Length: 244
Content-Type: application/x-www-form-urlencoded
Errors happen randomly, so is it coming from Facebook or from my server?