After running the command "docker-compose up" I get the error below. Is there any solution?
Your config is trying to bind port 5432, which is unavailable. It is probably busy with another application; for example, PostgreSQL may already be running on the host.
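To find what is holding the port, or to avoid the clash entirely, something along these lines should work (a sketch; adjust the tool for your OS):
sudo lsof -i :5432          # show the process listening on 5432
sudo ss -lntp | grep 5432   # alternative on modern Linux
# either stop that process, e.g. sudo systemctl stop postgresql,
# or remap the host side of the port in docker-compose.yml, e.g. "5433:5432"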
I have created a Mongo container using only the base mongo:3.6.4 official docker image and deployed it to my OpenShift OKD cluster, but cannot connect to this MongoDB instance using a Mongo client from outside the cluster.
I can access the pod at http://mongodb.my.domain and successfully get the "It looks like you are trying to access MongoDB over HTTP on the native driver port." message.
When using the terminal on the pod I can successfully log in using:
mongo "mongodb://mongoadmin:pass#localhost" --authenticationDatabase admin
But when trying to connect from outside OKD the connection fails.
My client needs to pass through a proxy before it can access the OKD pods, and I do have a .der certificate file, but I am unsure if this is related to the issue.
Some commands I have tried:
mongo "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
mongo --ssl "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
I expected to be able to connect successfully but instead get this error message:
MongoDB shell version v3.4.20
connecting to: mongodb://mongoadmin:pass@mongodb.my.domain:80
2019-05-15T11:32:25.514+0100 I NETWORK [thread1] recv(): message len 1347703880 is invalid. Min 16 Max: 48000000
2019-05-15T11:32:25.514+0100 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host 'mongodb.my.domain:80' :
connect#src/mongo/shell/mongo.js:240:13
#(connect):1:6
exception: connect failed
I am unsure if it is an issue with how I am using my MongoDB client or potentially some proxy settings on my OKD cluster. Any help would be appreciated.
The problem here is that external OpenShift routes are not well suited to handling database connections. When you attempt to connect to the Mongo pod via the route, the route accepts the connection and forwards it to the Mongo service. I believe this forwarding wraps the connection in an HTTP wrapper, which Mongo does not handle. The OKD documentation highlights that path-based route traffic should be HTTP-based, which is why the connection fails.
You can see evidence of this when connecting to the MongoDB route from a browser: it returns "It looks like you are trying to access MongoDB over HTTP on the native driver port." The user relief.malone explains this and has proposed a couple of solutions/workarounds in their answer to this question.
To add to relief.malone's answer, I would suggest that you port-forward from the MongoDB pod to your local machine for development/debugging. In production, you could deploy an application to OKD that references the MongoDB service via its internal DNS name, which will look something like this: mongodb.project_namespace.svc:27017. This way the route never interferes with the connection.
The OpenShift OKD documentation on port-forwarding isn't that informative, but since oc runs the kubectl command under the hood, you can read this Kubernetes guide for more information.
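To make both suggestions concrete, here is a sketch; the pod name is a placeholder, not a value from the question:
# development/debugging: forward the pod's Mongo port to your machine
oc port-forward <mongodb-pod-name> 27017:27017
# then connect through the tunnel from another terminal
mongo "mongodb://mongoadmin:pass@localhost:27017" --authenticationDatabase admin
# production: an app inside the cluster connects via the service's internal DNS name,
# e.g. mongodb://mongoadmin:pass@mongodb.<project_namespace>.svc:27017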
We have been trying to set up Concourse 5.0.0 (we already set up 4.2.2) in our AWS. We have created two instances, one for the web node and another for the worker. We are able to see the site up and running, but we are not able to run our pipeline. We checked the logs and noticed the worker throwing the error below.
Workerr.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"9.1.4"}}
We are assuming the worker is struggling to connect to the web instance and wondering if this could be due to a missing gdn configuration. The Concourse 5.0.0 release included both the concourse and gdn binaries. We want to try a --garden-config file to see if that fixes the problem.
Can somebody suggest how to write a garden config file?
I had this same problem and solved it using @umamaheswararao-meka's answer. (Using Ubuntu 18.04 on EC2)
I also had a problem with containers not being able to resolve domain names (https://github.com/docker/libnetwork/issues/2187). Here is the error message:
resource script '/opt/resource/check []' failed: exit status 1
stderr:
failed to ping registry: 2 error(s) occurred:
* ping https: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
* ping http: Get http://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
What I did:
sudo apt-get install resolvconf -y
# These are Cloudflare's DNS servers; pipe through sudo tee so the append
# itself runs with root privileges (sudo echo ... >> does not, because the
# redirection is done by the unprivileged shell)
echo "nameserver 1.1.1.1" | sudo tee -a /etc/resolvconf/resolv.conf.d/tail
echo "nameserver 1.0.0.1" | sudo tee -a /etc/resolvconf/resolv.conf.d/tail
sudo resolvconf -u
cat /etc/resolv.conf # just to make sure changes are in place
# restart the concourse service
Containers make use of resolv.conf, and since the file is generated dynamically on Ubuntu 18.04, this was the easiest way to make containers inherit the configuration.
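If you want to confirm the fix took effect inside a container, a quick check (assumes the docker CLI is available on the host; busybox's nslookup is enough for the test):
docker run --rm busybox nslookup registry-1.docker.io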
Also relevant snippets from man resolvconf:
-u     Just run the update scripts (if updating is enabled).
/etc/resolvconf/resolv.conf.d/tail
       File to be appended to the dynamically generated resolver configuration file. To append nothing, make this an empty file. This file is a good place to put a resolver options line if one is needed, e.g.,
It was an issue with gdn (the garden binary), which was not configured. We had to include CONCOURSE_BIND_IP=xx.xx.x.x (the IP where your gdn is located) and CONCOURSE_BIND_PORT=7777 (gdn's port) in the worker.env file, which solved the problem for us.
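For anyone else hitting this, a minimal sketch of the relevant worker.env lines; the IP is a placeholder, and the other keys your worker needs (such as the TSA host) are omitted:
# worker.env
CONCOURSE_BIND_IP=10.0.1.12   # placeholder: the address the local gdn (garden) server listens on
CONCOURSE_BIND_PORT=7777      # gdn's port, matching the 127.0.0.1:7777 dial error above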
I am using Orion and Mongo with Docker, installed as described in the Fastest Way section of the documentation. All of them are on the same server.
I am able to connect them and deal with entities and subscriptions (create, update, and delete all working fine), using a volume and persisting my data even after rebooting everything. The annoying part is that Orion continuously sends the error message:
mongoConnectionPool.cpp[194]: Database Error (connection failed, after
100 retries: 'couldn't connect to server localhost:27017 (127.0.0.1)
failed, connection attempt failed'
Why does Orion give this message if it actually does connect and update information in Mongo? What does this message imply, and how can I remove it?
Notes:
contextBroker --version: 0.26.1
Docker version 1.10.3, build 20f81dd
mongod --version: db version v2.6.11
@Cortwave pointed me to the solution of this issue.
I do have a link between orion and mongo in my docker-compose.yml file. It's a line under the orion section:
orion:
  command: -dbhost mongo
But when I stop only the orion container with docker stop orion and start it again with docker start orion, the link is lost.
To fix this, I can stop and start both containers with docker-compose stop/start, or, when I stop only orion, I can pass the db information again with docker start orion -dbhost mongo when starting it. (The compose-level restart is sketched below.)
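In other words, restarts done through docker-compose reapply the command: -dbhost mongo line, while restarting the bare container does not. A sketch of the compose-level restart:
# restart so the compose file's command/link settings are reapplied
docker-compose stop orion
docker-compose start orion
# or in one step
docker-compose restart orion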
I keep getting errors when trying to serve files locally. I am using Tomcat on port 8080.
When using Eclipse, I get the following error message:
Several ports (8080, 8009) required by Tomcat v8.0 Server at localhost are already in use. The server may already be running in another process, or a system process may be using the port. To start this server you will need to stop the other process or change the port number(s).
Question
How do I stop the server on port 8080 if I don't know which process started it?
Try going with a web browser to:
localhost:8080 or 127.0.0.1:8080
and
localhost:8009 or 127.0.0.1:8009
There you may see which service is running on those ports.
Then it will be simpler to understand what you have to stop.
EDIT:
You could use a command prompt and the command:
netstat -b
The -b flag shows the name of the executable involved in each connection, so you can see what is running on the port (on Windows it requires an elevated prompt).
For understanding how it works, there is a good explanation here.
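If you prefer to resolve it entirely from the command line, a sketch (take <pid> from the first command's output):
:: Windows: find the PID listening on 8080, then stop it
netstat -ano | findstr :8080
taskkill /PID <pid> /F
# Linux/macOS equivalent
lsof -i :8080
kill <pid>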
I have stopped a virtual machine with CentOS running an instance of Context Broker. Upon relaunching the system with the enabler, the latter gives a Fatal Error. See the log below:
# contextBroker
INFO@13:18:32 contextBroker.cpp[1348]: Orion Context Broker is running
INFO@13:18:32 mongoGlobal.cpp[164]: Successful connection to database
INFO@13:18:32 contextBroker.cpp[1157]: Connected to mongo at localhost:orion
INFO@13:18:32 mongoGlobal.cpp[483]: Database Operation Successful ({ conditions.type: "ONTIMEINTERVAL" })
INFO@13:18:32 rest.cpp[901]: Fatal Error (error starting REST interface)
I'm working with version 4.1.2 of Orion, on CentOS 6 running in VirtualBox. I run it with su because otherwise I get a "permission denied" error on a log file. For info, I enabled a bridged network connection just before the VM reboot.
Is it possible that, because the broker was not shut down correctly, something is blocking its restart? (PS: yes, I know there is a nearly identical error message in the administration guide, but I don't see any solution there.)
Thank you!
EDIT: one solution that works is uninstalling the contextBroker package and installing it again. I wish there were a cleaner way!
EDIT: this problem reproduces every time I kill the contextBroker application; after that, restarts don't help, but reinstalling the package does.
Make sure no other instance of the broker is running and using the same port (ps aux | grep contextBroker).
If there is another instance of the broker running, then the port will be taken and the REST initialization will fail.
About running as root because of log-file permissions ... Why not simply change the owner of the log-file instead?
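A sketch of both checks; the log path is an assumption based on Orion's usual packaging, so use whatever path your "permission denied" error names:
# look for a lingering broker and see what holds Orion's default port (1026)
ps aux | grep contextBroker
sudo netstat -lntp | grep 1026
# if a stale process turns up, stop it instead of reinstalling the package
sudo kill <pid>
# and instead of running as root, hand the log file to your user
sudo chown youruser /var/log/contextBroker/contextBroker.log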