Using MUP returned a container error: "Errors about nonexistent endpoints and containers are normal" - mongodb

I had been using this MUP config for deployment until recently, when I encountered an issue and had to stop and reboot the instance multiple times.
This caused the Meteor app container to shut down. The MongoDB container kept running, but it wasn't accessible through an SSH tunnel from a MongoDB GUI (running systemctl status mongo showed the status as activating).
While troubleshooting I ran docker ps -a. It showed only the MongoDB container running, with the Meteor app container completely shut down.
I tried running the MUP deployment again in an attempt to get the Meteor app container up and running.
However, I got an error: Removing docker containers. Errors about nonexistent endpoints and containers are normal.
I ran mup setup successfully and then tried mup reconfig, but got the same error as above. I have attached a screenshot of the error below.
To reproduce this error:
1. Create a Meteor app with Iron-meteor.
2. Set up an instance (EC2).
3. Set up deployment with Meteor Up.
4. Deploy your app with Meteor Up.
5. SSH into the instance and run docker ps. You should see at least two running containers, app and mongo respectively.
6. Run a command to stop the app container while the mongo container is still running.
7. Finally, go to your project and redeploy with mup. You should see an error similar to the one above.
For step 6, restarting the instance in my case shut down both containers, and I was able to get the mongo container back up and running.
However, I couldn't get the app container running, so I tried redeploying with the expectation that a new app container would be created if it didn't exist on the instance.
UPDATED!

I don't know if this will help, but in my experience, mup likes a fresh instance better than an existing one.
My first step would be a mup stop command, which shuts down the Docker containers. Then you can remove them with docker rm and remove the images with docker rmi. After that, do a mup setup again, followed by a mup deploy.
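As a rough sketch, that sequence looks something like this (the container and image names below are assumptions; MUP's bundled MongoDB container is usually called mongodb, and the app container is named after your app in mup.js, so check docker ps -a and docker images on the instance for the real names):
mup stop                          # run from the project directory
# then on the instance, over SSH:
docker ps -a                      # find the leftover container names
docker rm <app-name> mongodb      # remove the old containers
docker images                     # find the image names/IDs
docker rmi <app-image-id> mongo   # remove the old images
# back in the project directory:
mup setup
mup deploy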
If that doesn't work, you can start over with a fresh VM, i.e. a new droplet or EC2 instance. This is generally quite successful.

Related

Not able to restart MongoDB container

We are running MongoDB as a Docker container and all of a sudden it became unresponsive. We tried to restart the container, but it didn't restart: the docker restart command succeeded, but when we checked the service it had not actually restarted. We are also unable to connect to the DB from our Mongo client.
After a full VM restart the problem was solved. I would like to know if anyone has faced this issue and found a solution for it.
Here are the details of my setup:
Docker version 19.03.12
Mongo 4.0
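For anyone hitting something similar, a few commands that show whether the container really came back and what mongod logged on its last start (the container name mongo here is just an assumption; use yours):
docker ps --filter name=mongo                                     # is it listed as Up, and since when?
docker inspect -f '{{.State.Status}} {{.State.StartedAt}}' mongo  # low-level state and last start time
docker logs --tail 100 mongo                                      # recent mongod output and errors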

Docker containers cannot bind to network

I usually use the default network for Docker containers, and I had a mongo database running in one just fine, with the port exposed to the network successfully. Then I tried to attach a new python container to that container using the --link option (yes, I now realize that that is deprecated). An error was thrown, and in my hubris I didn't capture it; I just went on. Now, when I try to start my mongo database, it fails, saying that it can't bind to the network: "Failed to set up listener: SocketException: Permission denied"
I removed the container and tried to re-create it, but no luck. I've put this into a permanent state of bad. Any suggestions on how to fix this so I can get my database back?
Thanks.
Edit: Should have mentioned, Ubuntu 20.04, Docker 19.03.11
Also, this only seems to be a problem with new mongo containers. I can start postgres, web servers, etc. without issues.
Turns out, whatever that error was when I tried to use --link, it had corrupted the mongo image on my machine, so all new instances of that image failed to connect to the network. That's why removing the container and recreating it didn't fix the problem. I needed to delete the local mongo image, and re-pull from the docker hub.
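In practice, the reset described above amounts to something like this (the image tag, container name, and port mapping are assumptions; use whatever you originally ran):
docker rmi mongo                                    # delete the local copy of the image
docker pull mongo                                   # re-pull a fresh copy from Docker Hub
docker run -d --name mongodb -p 27017:27017 mongo   # recreate the container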

PgAdmin 4: Save Server Connection Details

Running pgAdmin 4.2.0 in a Docker container using the image dpage/pgadmin4, I notice that server connections are not being saved.
The container is created with the volume mapping:
./data/pgadmin:/root/.pgadmin
When the Docker container is restarted, or when a user logs in to the dashboard again, all the previously entered server connection details are gone.
How can we ensure that the connection details are properly saved?
For anyone still having this issue, I got it working using the container folder /var/lib/pgadmin, so in your yml the volume becomes:
./data/pgadmin:/var/lib/pgadmin
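If you are running it without Compose, an equivalent docker run sketch looks roughly like this (the host port, email, and password below are placeholder assumptions; the key part is the /var/lib/pgadmin mapping):
docker run -d --name pgadmin \
  -p 8080:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=changeme \
  -v "$(pwd)/data/pgadmin:/var/lib/pgadmin" \
  dpage/pgadmin4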
Do you start it with docker run dpage/pgadmin4 or docker start {containerID}?
When I was doing it with docker run I had similar issues to yours, but when I changed to docker start all the connection details were still there (even after a computer reboot).

Docker Compose - one specific container randomly doesn't start properly

I have a Docker environment with 5 containers that are composed via docker compose. Now, only on Mac machines and only sometimes (it seems completely random), one of these 5 containers doesn't start properly.
The weird thing is that docker ps says the container is running and I can connect to it. Inside the container is a JBoss server, and ps says there is a process running the JBoss. BUT in fact JBoss is not up and running: there is no logging in the docker compose console and JBoss is not accessible.
There is also the problem that, if this happens, the whole docker-compose process cannot be cancelled properly anymore. All containers shut down, or can at least be forced to shut down, except the JBoss container. Then the docker-machine hangs up.
I didn't find any hint on the interwebs... please help!
It seems that the process running inside the container is in a weird state.
Try killing it without providing a grace period, or removing the container.
stop : Stop a container by sending SIGTERM and then SIGKILL after a grace period
    --help=false           Print usage
    -t, --time=10          Seconds to wait for stop before killing it
kill : Kill a running container using SIGKILL or a specified signal
    --help=false           Print usage
    -s, --signal="KILL"    Signal to send to the container
rm : Remove one or more containers
    -f, --force=false      Force the removal of a running container (uses SIGKILL)
    --help=false           Print usage
    -l, --link=false       Remove the specified link
    -v, --volumes=false    Remove the volumes associated with the container
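Concretely, that boils down to something like this (the container name jboss is an assumption; take the real one from docker ps):
docker stop -t 0 jboss   # stop with no grace period
docker kill jboss        # or send SIGKILL directly
docker rm -f jboss       # or force-remove the container entirely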
Moreover, try checking the logs of the container:
docker logs --follow <container_name or container_id>
After updating to docker v1.10 the problem didn't occur anymore :)

mongodb replica set on azure vms - configure to run as a service

I've completed this tutorial and successfully deployed a 3 node replica set. I can connect to it from other hosts and all is good. The question I have is that in the tutorial it states
Start MongoDB
Once the configuration files have been edited, start the mongod database process on each instance:
Log on to the instance.
Run the following command to start the process:
mongod --config /etc/mongod.conf
This should start the mongod process.
To me this seems as though the replica set is running as a user process and not as a system service, as it would be with the command
sudo service mongodb start
So what happens if one of the machines reboots? Is that process dead? How can I configure the whole replica set to run as a service?
On a machine reboot, the mongod process will stop and you will have to restart it.
I am not sure whether the system scripts restart mongod automatically when the box comes back up, but you get service scripts for the mongod process automatically when you install MongoDB from the apt-get/yum packages.
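For example, on a systemd-based distribution with MongoDB installed from the official packages (which ship a unit usually named mongod), something like this makes the process come back after a reboot:
sudo systemctl enable mongod   # start automatically at boot
sudo systemctl start mongod
sudo systemctl status mongod   # verify it shows active (running)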