How to configure docker and postgres to use pact broker - postgresql

I have installed the brew version of postgres and Docker.
I followed the steps in https://github.com/DiUS/pact_broker-docker/blob/master/POSTGRESQL.md to create a dockerised Pact Broker and postgres.
When I ran the first command, the container was created but Docker returned an error:
docker run --name pactbroker-db -e POSTGRES_PASSWORD=ThePostgresPassword -e POSTGRES_USER=admin -e PGDATA=/var/lib/postgresql/data/pgdata -v /var/lib/postgresql/data:/var/lib/postgresql/data -d postgres
Response:
b8a2007e5dac9554e0ac615147d74467ceb6043dba027a4a21388721cee8f34c
docker: Error response from daemon: Mounts denied:
The path /var/lib/postgresql/data
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing...
I managed to bypass the first step by removing the bind-volume option:
docker run --name pactbroker-db -e POSTGRES_PASSWORD=ThePostgresPassword -e POSTGRES_USER=admin -e PGDATA=/var/lib/postgresql/data/pgdata -d postgres
Steps 2 and 3 from the linked guide succeeded:
(2) Connect to the container and execute psql via:
(3) Start the PactBroker container via:
After this, I tried to PUT the pact JSON using the command below:
curl -v -XPUT -H “Content-Type: application/json” -d #/HelloWorldConsumer-HelloWorldProvider.json http://localhost/pacts/provider/HelloWorldProvider/consumer/HelloWorldConsumer/version/1.0
I get the response below:
Could not resolve host: application
* Closing connection 0
curl: (6) Could not resolve host: application
* Trying ::1...
* connect to ::1 port 80 failed: Connection refused
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 80 (#1)
> PUT /pacts/provider/HelloWorldProvider/consumer/HelloWorldConsumer/version/1.0 HTTP/1.1
> Host: localhost
> User-Agent: curl/7.49.1
> Accept: */*
> Content-Length: 756
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 756 out of 756 bytes
< HTTP/1.1 415 Unsupported Media Type
< Content-Type: application/json;charset=utf-8
< Content-Length: 0
< Connection: keep-alive
< Status: 415 Unsupported Media Type
< Date: Tue, 07 Feb 2017 17:08:40 GMT
< Server: Webmachine-Ruby/1.4.0 Rack/1.2
< X-Powered-By: Phusion Passenger 5.0.15
<
* Connection #1 to host localhost left intact
I'm not sure whether this is a success or a failure, since the first lines say 'Could not resolve host: application' and 'Closing connection'.
And when I try to view localhost in a browser, the page appears blank.
Screenshot attached.
Looking for help as early as possible! Thanks in advance.

It seems you are attempting to mount a directory on your host machine (/var/lib/postgresql/data) that is not shared with Docker.
docker: Error response from daemon: Mounts denied:
The path /var/lib/postgresql/data
is not shared from OS X and is not known to Docker.
The message states this clearly. You should read more about Docker volumes, but I'd suggest you mount a different directory if this is for development on your Mac.
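For example, anything under /Users is shared with Docker for Mac by default. A minimal sketch, assuming a hypothetical host path ~/docker/pgdata:
# create a host directory that Docker for Mac already shares
mkdir -p ~/docker/pgdata
# re-run the postgres container, binding the shared directory instead of /var/lib/postgresql/data
docker run --name pactbroker-db \
  -e POSTGRES_PASSWORD=ThePostgresPassword \
  -e POSTGRES_USER=admin \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v ~/docker/pgdata:/var/lib/postgresql/data \
  -d postgres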
Secondly, you can see that you're getting an "Unsupported Media Type" response on the upload, so it has definitely failed:
HTTP/1.1 415 Unsupported Media Type
It appears the Content-Type is not being set correctly; you can see this in the output:
Content-Type: application/x-www-form-urlencoded
Please check that the file actually exists at path /HelloWorldConsumer-HelloWorldProvider.json and that it is valid JSON. Also note that your command uses curly quotes (“ ”) around the Content-Type header and # instead of @ before the file path; the shell does not treat curly quotes as quoting characters, which is why curl tried to resolve "application" as a host name and fell back to the default form-urlencoded content type.
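A corrected invocation, as a sketch, assuming the pact file sits in the current directory (straight quotes, and @ so curl reads the request body from the file):
curl -v -X PUT \
  -H "Content-Type: application/json" \
  -d @HelloWorldConsumer-HelloWorldProvider.json \
  http://localhost/pacts/provider/HelloWorldProvider/consumer/HelloWorldConsumer/version/1.0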

Related

How to run web server docker container in Azure devops pipeline and HTTP access it?

Please help me understand the cause of, and solution for, this issue in an Azure DevOps pipeline.
I'm trying to run a Docker container which runs a web server, as a step inside an Azure DevOps pipeline.
docker pull ${CONTAINER_IMAGE}
CONTAINER_ID=$(docker run -d --rm \
--cidfile cid.log \
-p 18080:8080 \
${CONTAINER_IMAGE}
)
echo "container id is ${CONTAINER_ID}"
docker ps
echo "--------------------------------------------------------------------------------"
echo "docker container ip"
IPS=$(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' ${CONTAINER_ID})
HOST=$(echo ${IPS[0]} | xargs)
echo $HOST
echo "--------------------------------------------------------------------------------"
echo "Testing web /health response from the container..."
curl -v http://${HOST}:8080/health
When run on a laptop, it works.
pulling the container image ****
...
Digest: sha256:9edc6a55118f0909cf7120a53837ae62b8e65154bc30b89630f4f82bc0c4add7
...
Starting the container ****...
container id is 088a28329d236582f0757862cb5bba2172ddb40c3315394db11ab39265f155b3
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
088a28329d23 **** "/bin/sh -c 'gunicor…" 6 seconds ago Up 5 seconds 0.0.0.0:18080->8080/tcp quirky_murdock
ccfc76e321aa gcr.io/inverting-proxy/agent "/bin/sh -c '/opt/bi…" 5 minutes ago Up 5 minutes proxy-agent
--------------------------------------------------------------------------------
docker container ip
172.17.0.2
--------------------------------------------------------------------------------
Testing web /health response from the container...
* Expire in 0 ms for 6 (transfer 0x55c1870780f0)
* Trying 172.17.0.2...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55c1870780f0)
* Connected to 172.17.0.2 (172.17.0.2) port 8080 (#0)
> GET /health HTTP/1.1
> Host: 172.17.0.2:8080
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: gunicorn
< Date: Mon, 08 Aug 2022 06:44:12 GMT
< Connection: close
< Content-Type: application/json
< Content-Length: 0
<
* Closing connection 0
However, it does not work in Azure DevOps CI pipeline.
++ docker run -d --rm --cidfile cid.log -p 18080:8080 *****
+ CONTAINER_ID=9c0f7c528b979651079d9f066c80cb9a26ec1af18415a0df5d269d252dfad0cb
+ echo 'container id is 9c0f7c528b979651079d9f066c80cb9a26ec1af18415a0df5d269d252dfad0cb'
container id is 9c0f7c528b979651079d9f066c80cb9a26ec1af18415a0df5d269d252dfad0cb
+ echo --------------------------------------------------------------------------------
+ echo 'listing docker processes...'
+ docker ps
--------------------------------------------------------------------------------
listing docker processes...
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+ echo --------------------------------------------------------------------------------
--------------------------------------------------------------------------------
docker container ip
+ echo 'docker container ip'
+ docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' 9c0f7c528b979651079d9f066c80cb9a26ec1af18415a0df5d269d252dfad0cb
Error: No such container: 9c0f7c528b979651079d9f066c80cb9a26ec1af18415a0df5d269d252dfad0cb
Related issues
There is a similar issue reported:
Cannot connect to Docker container running in VSTS
The problem is that the VSTS build agent runs in a Docker container. When the Docker container for Apache is started, it runs on the same level as the VSTS build agent Docker container, not nested inside the VSTS build agent Docker container.
There are two possible solutions:
Replacing localhost with the ip address of the docker host, keeping the port number 8083
Replacing localhost with the ip address of the docker container, changing the host port number 8083 to the container port number 80.
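Adapted to the ports used in this question (18080 published on the host, 8080 inside the container; the quoted issue used 8083 and 80), the two substitutions would look roughly like this sketch:
# Solution 1: address the Docker host on the published port
# (from inside a container on the default bridge, the default gateway is typically the Docker host)
HOST_IP=$(ip route | awk '/default/ {print $3}')
curl -v "http://${HOST_IP}:18080/health"
# Solution 2: address the container directly on the container port
curl -v "http://${HOST}:8080/health"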
I followed that solution (as in the pipeline code at the top of this question), but it did not work.
Please replace docker ps with docker ps -a and check the logs of the dead container for the reason it failed.
You can see the logs using docker logs <container-id>
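A sketch of that check, assuming the container id was written to cid.log as in the pipeline script above. Note that --rm removes the container as soon as it exits, which would also explain the empty docker ps output; consider dropping that flag while debugging:
# list all containers, including exited ones
docker ps -a
# print the dead container's output to find out why it stopped
docker logs "$(cat cid.log)"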

Mount a bucket using S3FS doesn't work as non-root user

I'm trying to mount an Exoscale bucket to an Exoscale VM running Ubuntu 20.04 using s3fs, as the ubuntu user created by default by Exoscale. After reading the s3fs README and a few online tutorials, here is what I have done.
# install s3fs
sudo apt-get install -y s3fs
# create a password file with the right permissions
echo API-KEY:API-SECRET > /home/ubuntu/.passwd-s3fs
chmod 600 /home/ubuntu/.passwd-s3fs
# mount the bucket
sudo mkdir /home/ubuntu/bucket
sudo s3fs test-bucket /home/ubuntu/bucket -o passwd_file=${HOME}/.passwd-s3fs -o url=https://sos-bg-sof-1.exo.io
The command doesn't output anything.
If I try to see the permissions on the directory, I get
$ ls -l
ls: cannot access 'bucket': Permission denied
total 0
d????????? ? ? ? ? ? bucket
If I try to run it with debug output enabled, I get
sudo s3fs test-bucket /home/ubuntu/bucket -o passwd_file=${HOME}/.passwd-s3fs -o url=https://sos-bg-sof-1.exo.io -o dbglevel=info -f -o curldbg
[CRT] s3fs.cpp:set_s3fs_log_level(297): change debug level from [CRT] to [INF]
[INF] s3fs.cpp:set_mountpoint_attribute(4400): PROC(uid=0, gid=0) - MountPoint(uid=1000, gid=1000, mode=40775)
[INF] s3fs.cpp:s3fs_init(3493): init v1.86(commit:unknown) with GnuTLS(gcrypt)
[INF] s3fs.cpp:s3fs_check_service(3828): check services.
[INF] curl.cpp:CheckBucket(3413): check a bucket.
[INF] curl.cpp:prepare_url(4703): URL is https://sos-bg-sof-1.exo.io/test-bucket/
[INF] curl.cpp:prepare_url(4736): URL changed is https://test-bucket.sos-bg-sof-1.exo.io/
[INF] curl.cpp:insertV4Headers(2753): computing signature [GET] [/] [] []
[INF] curl.cpp:url_to_host(99): url is https://sos-bg-sof-1.exo.io
* Trying 194.182.177.119:443...
* TCP_NODELAY set
* Connected to test-bucket.sos-bg-sof-1.exo.io (194.182.177.119) port 443 (#0)
* found 414 certificates in /etc/ssl/certs
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: *.sos-bg-sof-1.exo.io (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: OU=Domain Control Validated,OU=Gandi Standard Wildcard SSL,CN=*.sos-bg-sof-1.exo.io
* start date: Mon, 22 Apr 2019 00:00:00 GMT
* expire date: Thu, 22 Apr 2021 23:59:59 GMT
* issuer: C=FR,ST=Paris,L=Paris,O=Gandi,CN=Gandi Standard SSL CA 2
> GET / HTTP/1.1
Host: test-bucket.sos-bg-sof-1.exo.io
User-Agent: s3fs/1.86 (commit hash unknown; GnuTLS(gcrypt))
Accept: */*
Authorization: AWS4-HMAC-SHA256 Credential=EXO6ff92566c0d6283678d65a81/20201209/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=22c63079ecfa9bf2f36f1da8e39835172bb1ce3cc59d62484cc0377c854571d4
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date: 20201209T141053Z
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< server: nginx
< date: Wed, 09 Dec 2020 14:10:53 GMT
< content-type: application/xml
< content-length: 262
< vary: Accept-Encoding
< x-amz-bucket-region: bg-sof-1
< x-amz-request-id: 8536e881-5897-4843-b20e-891f734ccb2a
< x-amzn-requestid: 8536e881-5897-4843-b20e-891f734ccb2a
< x-amz-id-2: 8536e881-5897-4843-b20e-891f734ccb2a
<
* Connection #0 to host ubuntu-bucket-production-120920.sos-bg-sof-1.exo.io left intact
[INF] curl.cpp:RequestPerform(2416): HTTP response code 200
[INF] curl.cpp:ReturnHandler(318): Pool full: destroy the oldest handler
This doesn't show any evident error.
Am I missing anything here?
Doing the same as root works as expected. However, I'm planning to disable root access, so I need to make it work as ubuntu.
To make it work, I had to add the following options:
sudo s3fs test-bucket /home/ubuntu/bucket -o passwd_file=${HOME}/.passwd-s3fs -o url=https://sos-bg-sof-1.exo.io -o uid=1000,gid=1000,allow_other,mp_umask=002
Hope this may help others.
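For context: because the mount is performed via sudo, the FUSE mount belongs to root, and FUSE by default hides a mount from every user except the one who mounted it. uid/gid map the files' ownership to the ubuntu user (uid/gid 1000 on a default Ubuntu image), and allow_other lifts the single-user restriction. To make the mount survive reboots, a sketch of an /etc/fstab entry with the same options (bucket name, endpoint, and uid are this question's values):
test-bucket /home/ubuntu/bucket fuse.s3fs _netdev,passwd_file=/home/ubuntu/.passwd-s3fs,url=https://sos-bg-sof-1.exo.io,uid=1000,gid=1000,allow_other,mp_umask=002 0 0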
@Sig's answer worked for me like a charm, thanks!

gnutls_handshake() failed: The TLS connection was non-properly terminated

I'm writing a simple akka-http webserver in Scala. I want to establish a TLS connection (webserver authentication) with curl. I'm following the documentation reachable here. I first created a certificate and a key for the webserver (signed by the rootCA.pem created previously), and then converted them to a keystore file webserver.ks following this guide.
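For reference, a sketch of that conversion, assuming hypothetical file names webserver.crt and webserver.key for the certificate/key pair (the linked guide's names may differ): bundle the pair into PKCS12, then import it into a Java keystore.
# bundle certificate and private key into a PKCS12 file
openssl pkcs12 -export -in webserver.crt -inkey webserver.key -out webserver.p12 -name webserver
# import the PKCS12 bundle into the keystore the server loads
keytool -importkeystore -srckeystore webserver.p12 -srcstoretype PKCS12 -destkeystore webserver.ks
The webserver seems to start correctly, but when I run curl like this: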
curl -i -v -XGET "https://localhost:9090/ready" --cacert rootCA.pem -k
I see this error:
* Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9090 (#0)
* found 1 certificates in rootCA.pem
* found 597 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* gnutls_handshake() failed: The TLS connection was non-properly terminated.
* Closing connection 0
curl: (35) gnutls_handshake() failed: The TLS connection was non-properly terminated.
I've tried to start webserver also without TLS and it performs without any problem.
How can I solve this?
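(As a first debugging step, a sketch: openssl s_client shows the server's certificates and the exact point where the handshake breaks off, which gives more detail than curl's GnuTLS message.)
# connect with verbose TLS diagnostics, trusting the same root CA
openssl s_client -connect localhost:9090 -CAfile rootCA.pem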

Catalyst exiting when started with start-stop-daemon

I am trying to run Catalyst on CentOS 7 using start-stop-daemon. Here is the start-stop-daemon command that I run:
start-stop-daemon --start --pidfile /var/run/myapp.pid -d "/home/user/myapp" --exec /opt/perlbrew/perls/perl-5.22.0/bin/perl --startas "/home/user/myapp/script/myapp_fastcgi.pl" --chuid root --make-pid -- "-l :8100 -n 6"
Then I get this error:
Cannot resolve host name -- exiting!
It displays this error after loading the chained actions and printing them to the screen, and after displaying the final message:
[info] myapp powered by Catalyst 5.90112
In /etc/hosts I've tried commenting out any hostnames I thought might be causing an issue:
127.0.0.1 myapp.com myapp.com
#127.0.0.1 localhost.localdomain localhost
#127.0.0.1 localhost4.localdomain4 localhost4
# The following lines are desirable for IPv6 capable hosts
#::1 myapp.com myapp.com
#::1 localhost.localdomain localhost
#::1 localhost6.localdomain6 localhost6
What's strange is that if I don't use start-stop-daemon and I just start the server from the command-line, the server starts fine.
Most likely it can't resolve your hostname.
Check what the hostname command returns and make sure the same host name is present in your /etc/hosts. And don't assign it to the loopback address; use a real IP.
You can also trace exactly what it's trying to resolve using this method:
https://serverfault.com/questions/666482/how-to-find-out-pid-of-the-process-sending-packets-generating-network-traffic
Or, even simpler, run tcpdump -s 0 port 53.
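A sketch of those checks (getent goes through the same resolver path as applications, so it reflects /etc/hosts):
# what name does the system report, and does it resolve?
hostname
getent hosts "$(hostname)"
# watch DNS traffic while starting the daemon to see which name it looks up
sudo tcpdump -s 0 port 53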

curl: (6) could not resolve host ;401 Unauthorized on Openstack Swift (SAIO)

I'm trying to set up a 'Swift All In One' system on an Ubuntu 12.04 VM, following http://docs.openstack.org/developer/swift/development_saio.html.
I use VMware Workstation 12 Pro on a Win7 64-bit host with 'Host-only' network mode. The VM's IP address is 192.168.137.200.
When I run the command on the VM:
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://192.168.137.200/auth/v1.0
It works well.
But when I run the command on the host machine (the Win7 side), it fails and returns:
* Could not resolve host: test:tester'; Host not found
* Closing connection #0
curl: (6) Could not resolve host: test:tester'; Host not found
* Could not resolve host: testing'; Host not found
* Closing connection #0
curl: (6) Could not resolve host: testing'; Host not found
* About to connect() to 192.168.137.200 port 80 (#0)
* Trying 192.168.137.200... connected
* Connected to 192.168.137.200 (192.168.137.200) port 80 (#0)
> GET /auth/v1.0 HTTP/1.1
> User-Agent: curl/7.20.1 (amd64-pc-win32) libcurl/7.20.1 OpenSSL/0.9.8n zlib/1.2.3
> Host: 192.168.137.200
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Date: Fri, 25 Mar 2016 05:57:24 GMT
< Content-Length: 131
< Content-Type: text/html; charset=UTF-8
< Www-Authenticate: Swift realm="unknown"
< X-Trans-Id: tx081d67bec35b457bb4cb8-0056f4d343
< Vary: Accept-Encoding
<
<html><h1>Unauthorized</h1><p>This server could not verify that you are authorized to access the document you requested.</p></html>
* Connection #0 to host 192.168.137.200 left intact
* Closing connection #0
Then I made another Ubuntu 12.04 VM and ran the same command on it; it works well.
Try using the X-Auth-User and X-Auth-Key headers instead: https://swiftstack.com/docs/cookbooks/swift_usage/auth.html
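Note also that the "Could not resolve host: test:tester'" lines point at quoting: cmd.exe on Windows does not treat single quotes as quoting characters, so the header values leak out as separate arguments that curl then tries to resolve as hosts. A sketch of the same request from the Windows host, with double quotes and the suggested headers:
curl -v -H "X-Auth-User: test:tester" -H "X-Auth-Key: testing" http://192.168.137.200/auth/v1.0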