I have created a Docker container image in my Bluemix private registry. My Dockerfile contains the following:
FROM registry.ng.bluemix.net/ibmliberty
RUN rm -rf /opt/ibm/wlp/usr/servers/defaultServer/server.xml
ADD server.xml /opt/ibm/wlp/usr/servers/defaultServer/server.xml
RUN rm -rf /opt/ibm/wlp/usr/servers/defaultServer/workarea
ADD ./build/libs/*.war /opt/ibm/wlp/usr/servers/defaultServer/apps
ENV LICENSE accept
I also created a MongoDB service to bind to my container.
After creating the container and binding it to the MongoDB service, I hit the container's IP/URL. My application runs but does not connect to the database; the container logs show a socket timeout exception.
Logs:
[err] at java.lang.Thread.run(Thread.java:785)
[err] com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=50.23.230.160:10082, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}]
I have included the DB parameters in my server.xml, and I have also tried supplying them as environment variables when creating the container. These parameters include the host, port, database name, username, and password.
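Since Bluemix normally delivers bound-service credentials through the container's environment, one sanity check is to read the connection URI out of the VCAP_SERVICES variable at startup rather than hard-coding it. A minimal sketch, assuming the bound service appears under a key like compose-for-mongodb (inspect your own VCAP_SERVICES for the real key and structure):

```python
import json
import os

def mongo_uri_from_vcap(vcap_json, service_key="compose-for-mongodb"):
    """Extract the MongoDB connection URI from a VCAP_SERVICES document.

    service_key is an assumption; check which key your bound Mongo
    service actually appears under in VCAP_SERVICES.
    """
    services = json.loads(vcap_json)
    credentials = services[service_key][0]["credentials"]
    return credentials["uri"]

# Inside the container, where Bluemix sets the variable:
# uri = mongo_uri_from_vcap(os.environ["VCAP_SERVICES"])
```

If the URI printed this way differs from what server.xml contains, that mismatch is a likely cause of the timeout.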
After trying many approaches, I still cannot find what I am doing wrong. Please help me resolve this issue.
I'm trying to get RabbitMQ to monitor a PostgreSQL database and create a message queue entry when database rows are updated. The eventual plan is to feed this queue into an AWS EKS (Elastic Kubernetes Service) cluster as a job.
I've read many approaches to this, but as a newcomer to RabbitMQ they are still confusing, and many seem to have been written more than five years ago, so I'm not sure whether they'll still work with current versions of Postgres and RabbitMQ.
I've followed this guide for the area51/notify-rabbit Docker container, which can connect the two via a Node app, but when I ran it, the container immediately stopped and didn't seem to do anything.
There is also this guide, which uses a Go app to connect the two, but I'd rather not use Go outside of a Docker container.
Additionally, there is this method: installing the pg_amqp extension, from a repository that hasn't been updated in years, which allows a direct connection from PostgreSQL to RabbitMQ. However, when I followed this and attempted to install pg_amqp on my Postgres database (PostgreSQL 12), I could no longer connect to the database using psql, getting the classic error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My current setup: a RabbitMQ server running in a Docker container on an AWS EC2 instance, which I can access over the internet. I ran the following to install and run it:
docker pull rabbitmq:3-management
docker run --rm -p 15672:15672 -p 5672:5672 rabbitmq:3-management
The postgresql database is running on a separate EC2 instance and both instances have the required ports open for accessing data from each server.
I have also looked into using Amazon SQS for this, but I couldn't find anything on linking PostgreSQL to it. I haven't really seen any guides or Stack Overflow questions on this since 2017/18, so I'm wondering whether this is still the best way to create a message broker for a Kubernetes system. Any help/pointers much appreciated.
In the end, I decided the best thing to do was to write some simple Python scripts that perform the LISTEN/NOTIFY steps and route traffic from PostgreSQL to RabbitMQ, based on the following code: https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237a900c79
I run them inside Docker containers in my Kubernetes cluster, so they benefit from automatic restarts if they fail.
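A minimal sketch of such a relay, assuming the third-party psycopg2 and pika drivers, a Postgres trigger that calls pg_notify on row updates, and hypothetical connection strings (the channel and queue name row_updates are also assumptions):

```python
import json
import select

def build_message(channel, payload):
    """Wrap a Postgres NOTIFY payload in a JSON envelope for RabbitMQ."""
    return json.dumps({"channel": channel, "payload": payload}).encode()

def relay(pg_dsn, amqp_url, pg_channel="row_updates", queue="row_updates"):
    """Forward NOTIFY events on pg_channel to a RabbitMQ queue.

    The drivers are imported here so the pure helper above can be used
    without either one installed.
    """
    import pika       # third-party: RabbitMQ client
    import psycopg2   # third-party: Postgres driver

    pg = psycopg2.connect(pg_dsn)
    pg.set_session(autocommit=True)
    cur = pg.cursor()
    cur.execute(f"LISTEN {pg_channel};")

    mq = pika.BlockingConnection(pika.URLParameters(amqp_url))
    ch = mq.channel()
    ch.queue_declare(queue=queue, durable=True)

    while True:
        # Block until the Postgres socket has something to read
        if select.select([pg], [], [], 60) == ([], [], []):
            continue  # timeout: loop and wait again
        pg.poll()
        while pg.notifies:
            note = pg.notifies.pop(0)
            ch.basic_publish(exchange="",
                             routing_key=queue,
                             body=build_message(note.channel, note.payload))

# relay("dbname=mydb user=me", "amqp://guest:guest@localhost:5672/%2F")
```

Running one such relay per watched channel keeps each container simple, and a Kubernetes restart policy covers dropped connections.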
I usually use the default network for Docker containers, and I had a Mongo database running in one just fine, with its port exposed to the network successfully. Then I tried to attach a new Python container to it using the --link option (yes, I now realize that is deprecated). An error was thrown, and in my hubris I didn't capture it; I just moved on. Now, when I try to start my Mongo database, it fails, saying it can't bind the network: "Failed to set up listener: SocketException: Permission denied"
I removed the container and tried to re-create it, but no luck. I've put this into a permanent state of bad. Any suggestions on how to fix this so I can get my database back?
Thanks.
Edit: Should have mentioned, Ubuntu 20.04, Docker 19.03.11
Also, this only seems to be a problem with new Mongo containers; I can start Postgres, web servers, etc. without issues.
Turns out, whatever that error was when I used --link, it had corrupted the local Mongo image, so all new instances of that image failed to connect to the network. That's why removing and recreating the container didn't fix the problem. I needed to delete the local Mongo image and re-pull it from Docker Hub.
I have created a Mongo container using only the base mongo:3.6.4 official Docker image and deployed it to my OpenShift OKD cluster, but I cannot connect to this MongoDB instance with a Mongo client from outside the cluster.
I can access the pod at http://mongodb.my.domain and successfully get the "It looks like you are trying to access MongoDB over HTTP on the native driver port." message.
When using the terminal on the pod I can successfully log in using:
mongo "mongodb://mongoadmin:pass@localhost" --authenticationDatabase admin
But when trying to connect from outside OKD the connection fails.
My client needs to pass through a proxy before it can access the OKD pods and I do have a .der certificate file but am unsure if this is related to the issue.
Some commands I have tried:
mongo "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
mongo --ssl "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
I expected to be able to connect successfully but instead get this error message:
MongoDB shell version v3.4.20
connecting to: mongodb://mongoadmin:pass@mongodb.my.domain:80
2019-05-15T11:32:25.514+0100 I NETWORK [thread1] recv(): message len 1347703880 is invalid. Min 16 Max: 48000000
2019-05-15T11:32:25.514+0100 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host 'mongodb.my.domain:80' :
connect#src/mongo/shell/mongo.js:240:13
#(connect):1:6
exception: connect failed
I am unsure if it is an issue with how I am using my MongoDB client or potentially some proxy settings on my OKD cluster. Any help would be appreciated.
The problem here is that external OpenShift routes aren't great at handling database connections. When you connect to the Mongo pod via the route, the route accepts the connection and forwards it to the Mongo service. I believe this forwarding wraps the connection in an HTTP wrapper, which Mongo doesn't handle. The OKD documentation highlights that path-based route traffic should be HTTP-based, which causes the connection to fail.
You can see evidence of this when trying to connect to a MongoDB database and it returns "It looks like you are trying to access MongoDB over HTTP on the native driver port." to the browser. The user relief.malone explains this and has proposed a couple of solutions / workarounds in their answer to this question.
To add to relief.malone's answer, I would suggest port-forwarding from the MongoDB pod to your local machine for development/debugging. In production, you could deploy an application to OKD that references the MongoDB service via its internal DNS name, which will look something like this: mongodb.project_namespace.svc:27017. This way the route doesn't interfere with the connection.
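For illustration, a small helper that builds that internal connection string (the service name, namespace, and credentials here are placeholders, not values from the question):

```python
def internal_mongo_uri(user, password, service="mongodb",
                       namespace="my-project", port=27017):
    # <service>.<namespace>.svc resolves only inside the cluster
    return (f"mongodb://{user}:{password}"
            f"@{service}.{namespace}.svc:{port}/?authSource=admin")

# internal_mongo_uri("mongoadmin", "pass")
# -> "mongodb://mongoadmin:pass@mongodb.my-project.svc:27017/?authSource=admin"
```

For local debugging, `oc port-forward <mongo-pod> 27017:27017` lets the same client connect to localhost:27017 without going through a route at all.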
The OpenShift OKD documentation on port-forwarding isn't that informative, but since oc runs the kubectl command under the hood, you can read the Kubernetes guide for more detail.
I am trying to run a Seyren instance locally using the dockerized MongoDB image.
After pulling the latest Docker image, I run it and expose port 27017:
docker run --name mongodb -v /data/db:/data/db -p 27017:27017 -d mongo
The next step is to run the compiled Seyren jar, passing it some variables. docker.local is mapped to the Docker Toolbox IP in /etc/hosts:
java -jar seyren-1.3.0.jar GRAPHITE_URL=https://graphiteurl.io MONGO_URL=mongodb://docker.local:27017
But I then get the following errors:
30/03/2016 13:58:02.643 [localhost-startStop-1] INFO com.seyren.mongo.MongoStore - Ensuring that we have all the indices we need
30/03/2016 13:58:12.661 [localhost-startStop-1] ERROR com.seyren.mongo.MongoStore - Failure while bootstrapping Mongo indexes. If you've hit this problem it's possible that you have two checks which are named the same and violate an index which we've tried to add. Please correct the problem by removing the clash. If it's something else, please let us know on Github!
com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=localhost:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.ConnectException: Connection refused}}]
What am I missing here?
EDIT:
When I run the Seyren jar, a seyren database is indeed created in my Mongo instance ... so some connection must be established.
As I understand it, you are using Docker Toolbox on a Mac. Docker Toolbox does not run on your localhost; it runs inside a VirtualBox virtual machine, so you should use that machine's IP instead of localhost. You can get it with the docker-machine env command in a terminal; the IP in the DOCKER_HOST variable is the host of your MongoDB instance.
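For example, a tiny helper to rewrite the Mongo URL once you know the VM's IP (the IP shown is a placeholder; take yours from DOCKER_HOST):

```python
def point_at_docker_machine(mongo_url, vm_ip):
    """Swap localhost / the docker.local alias for the Toolbox VM's IP."""
    return mongo_url.replace("localhost", vm_ip).replace("docker.local", vm_ip)

# point_at_docker_machine("mongodb://docker.local:27017", "192.168.99.100")
# -> "mongodb://192.168.99.100:27017"
```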
Found the solution. I had to use the mongo:2.7 image, since that is the only one that worked for me.
I am using docker-compose to run Orion + Mongo.
Then, I am starting accumulator-server with:
drasko#Lenin:~/fiware/fiware-orion/scripts$ ./accumulator-server.py 1028 /accumulate on
verbose mode is on
* Running on http://0.0.0.0:1028/ (Press CTRL+C to quit)
However, when a notification is triggered, it fails with an error:
orion_1 | WARNING#21:27:21 httpRequestSend.cpp[438]: Notification failure for localhost:1028 (curl_easy_perform failed: Couldn't connect to server)
Could this be due to the fact that Orion runs inside Docker, and how can I solve it?
Your problem is that your Orion host cannot reach the accumulator.
In the subscription you are using a reference field with the value "localhost", so Orion is looking in its localhost relative to the docker container.
To solve it, you should either run the accumulator inside the docker image or make Orion able to contact the accumulator in some other way than "localhost" (maybe run accumulator in a different container and link it using docker compose).
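One possible shape for that last option, sketched as a docker-compose fragment (the image names, build path, and service names are assumptions, not taken from the question). On the same compose network, the subscription's reference URL can use the service name instead of localhost:

```yaml
# Hypothetical compose sketch; adjust images and paths to your setup.
version: "3"
services:
  mongo:
    image: mongo:3.6
  orion:
    image: fiware/orion
    depends_on: [mongo]
    command: -dbhost mongo
    ports:
      - "1026:1026"
  accumulator:
    build: ./accumulator     # image wrapping accumulator-server.py
    ports:
      - "1028:1028"
# In the subscription, use "http://accumulator:1028/accumulate"
# instead of "http://localhost:1028/accumulate".
```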