After installing Ubuntu 20 and then installing Docker, I ran:
$ docker network create test-network
$ docker pull mongo
$ docker run --network test-network --name mongodb \
-e MONGO_INITDB_ROOT_USERNAME=admin \
-e MONGO_INITDB_ROOT_PASSWORD=pawwrord \
mongo
I got an error like this:
/usr/local/bin/docker-entrypoint.sh: line 381: 25 Illegal instruction (core dumped) "${mongodHackedArgs[@]}" --fork
Do you know what the problem is? I just need some guidance on how to investigate it.
UPDATE
I don't have any problem with other Docker Hub images.
I get this error only when I try to run mongo.
MongoDB 5.0 requires a Sandy Bridge or newer CPU (it needs the AVX instruction set). Get a newer processor or use an older version of MongoDB (4.4 or earlier).
https://jira.mongodb.org/browse/SERVER-54407
I had exactly the same problem: it runs on my local computer but not in a VM.
Thanks to this, I simply changed the mongo image from latest (which is 5.0) to 4.4 and it runs well. The VM probably needs a newer CPU.
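For reference, pinning the image to 4.4 would look something like this (a sketch reusing the asker's original command; only the image tag changes):
$ docker pull mongo:4.4
$ docker run --network test-network --name mongodb \
    -e MONGO_INITDB_ROOT_USERNAME=admin \
    -e MONGO_INITDB_ROOT_PASSWORD=pawwrord \
    mongo:4.4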
It's a shortcoming in your CPU's available instruction set. MongoDB 5.0 requires CPU instructions that are only available on Sandy Bridge era CPUs or more recent.
I had this problem using a Proxmox VM, which defaults to a simpler virtual CPU. Simply changing the virtual CPU to type "host" fixed the problem, by allowing the VM CPU to include all the instructions of the host CPU.
If you are running directly on metal, you'll need a newer CPU. Good luck!
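If you prefer the Proxmox CLI over the web UI, the same change can be made with qm (a sketch; <vmid> is a placeholder for your VM's numeric ID, and the VM needs a restart for the new CPU type to take effect):
$ qm set <vmid> --cpu host   # run on the Proxmox host; exposes the host CPU's full instruction set to the VM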
I changed the mongod.conf.orig of the MongoDB instance running on ECS, but when I restart the task, the changes are gone.
Here are the details:
I have a MongoDB instance running on ECS, and it keeps crashing due to running out of memory.
I have found the reason: I set the ECS memory to 8 GB, but because mongo runs in a container, it detects more memory than that.
When I run db.hostInfo(), I get a memSizeMB higher than 16 GB.
As a result, when I run db.serverStatus().wiredTiger.cache, I get a "maximum bytes configured" higher than 8 GB.
So I need to reduce wiredTigerCacheSizeGB in the config file.
I used the command copilot svc exec -c /bin/sh -n mongo to connect to it.
Then I found a file named mongod.conf.orig.
I ran apt-get install vim to install an editor and edited mongod.conf.orig.
But after I restart the mongo task, all my changes are gone, including the vim I just installed.
Did anyone meet the same problem? Any information will be appreciated.
ECS containers have ephemeral storage. In your case, you could create an EFS volume and mount it in the container, then share the configuration through it.
If you use CloudFormation, look at mount points.
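Alternatively, if the only change you need is the cache limit, it can be passed as a startup flag instead of editing mongod.conf.orig at all. This is my suggestion rather than part of the answer above, and it assumes you can override the container's command in the ECS task definition; the 4 GB value is only an example:
# hypothetical command override for the mongo container: cap the WiredTiger cache at 4 GB
mongod --wiredTigerCacheSizeGB 4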
Using Ubuntu 18.04.
I am trying to install a kubernetes cluster on my local machine (localhost) using this guide (LXD + conjure-up kubernetes):
https://kubernetes.io/docs/getting-started-guides/ubuntu/local/#before-you-begin
When I run:
conjure-up kubernetes
I make my selection in the install wizard, choose localhost for "Choose a cloud", and use the defaults for the rest. It then starts to install, and after 30-40 minutes it completes with an error.
Here is the log:
https://pastebin.com/raw/re1UvrUU
Where one error says:
2018-07-25 20:09:38,125 [ERROR] conjure-up/canonical-kubernetes - events.py:161 - Unhandled exception in <Task finished coro=<BaseBootstrapController.run() done, defined at /snap/conjure-up/1015/lib/python3.6/site-packages/conjureup/controllers/juju/bootstrap/common.py:15> exception=BootstrapError('Unable to bootstrap (cloud type: localhost)',)>
but that does not really help much.
Any suggestions as to why the install wizard/conjure-up fails?
Also based on this post:
https://github.com/conjure-up/conjure-up/issues/1308
I have tried first disabling the firewall:
sudo ufw disable
and then re-running the installation/conjure-up wizard, but I get the same error.
Some more details on how I installed and configured LXD/conjure-up below:
$ snap install lxd
lxd 3.2 from 'canonical' installed
$ /snap/bin/lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=26GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Configured group membership:
sudo usermod -a -G lxd $USER
newgrp lxd
Next, I installed:
sudo snap install conjure-up --classic
And then ran the installation:
conjure-up kubernetes
I wasn't able to reproduce your exact problem, but I got conjure-up + LXD installed and, in the end, Kubernetes on my newly installed VirtualBox Ubuntu 18.04 (Desktop) VM. Hopefully this answer helps you somehow!
I looked through the kubernetes.io documentation page, and it lacks some small bits of information: it does mention lxd but not the lxd init part, which I assume you picked up from the conjure-up user manual.
With that said, I followed the conjure-up user manual with some minor changes along the way. I'm assuming it's OK for you to use the edge version of conjure-up; I started off with the stable one but switched to edge while testing different combinations.
Also, please ensure that you have the recommended resources stated by the user manual available; conjure-up and the Canonical Distribution of Kubernetes launch a number of containers for you. You might not need 3 x etcd, 3 x worker nodes and 2 x master, and if you don't, just tune the number of containers down in the conjure-up wizard.
These are the steps I performed (as my local user):
Make sure your Ubuntu box is updated: sudo apt update && sudo apt upgrade
Install conjure-up by running: sudo snap install conjure-up --classic --edge
Install lxd by running: sudo snap install lxd
With lxd comes the client part, lxc; if you run e.g. lxc list you should get an empty table (no containers started yet). I got a permission error at this point, so I ran sudo chown -R lxd:lxd /var/snap/lxd/ to change the owner and group of the lxd directory containing the socket you'll be communicating with using lxc.
Add your user to the lxd group: sudo usermod -a -G lxd $USER && newgrp lxd, then log off and on again to make this permanent and not only active in your current shell.
Now create a lxd bridge manually with the following command: lxc network create lxdbr1 ipv4.address=auto ipv4.nat=true ipv6.address=none ipv6.nat=false
Now run the init part of lxd with lxd init. Remember to answer no when asked whether to create a new local network bridge; in the next prompt, provide your newly created network bridge (lxdbr1) instead. The rest of the answers can be left at their defaults.
Now continue by running conjure-up kubernetes and choose localhost as your cloud type. For me the localhost choice was greyed out at first; it only worked once I had created the network bridge manually rather than via the lxd init step.
Skip the additional components you can install like Rancher, Prometheus etc.
Choose your new network bridge and the default storage pool, proceed to the next step.
In the next step customize your Kubernetes cluster if needed and then hit Deploy. And now you wait!
You can always troubleshoot and list all the containers created with the lxc tool. If you've ever used Docker, the lxc tool feels a lot like the docker client.
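For example, a few lxc commands that are handy when a deployment hangs (a sketch; the container name juju-xxxx-0 is a made-up placeholder, use whatever lxc list shows on your machine):
$ lxc list                       # all containers created by conjure-up/juju, with state and IPs
$ lxc info juju-xxxx-0           # details for a single container
$ lxc exec juju-xxxx-0 -- bash   # open a shell inside it to look at logs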
And finally, some thoughts and observations: there are a lot of moving parts to conjure-up, as you might have seen. It's actually described as: "conjure-up is a thin layer spanning a few different underlying technologies - Juju, MAAS and LXD."
For reference, I ended up with the following versions installed:
lxd version 3.3
conjure-up version 2.6.1
There is a data storage container, a MySQL container, a PHP container and an nginx container. Is it possible to let these processes run on different OSes?
So one on Debian, another on CentOS, and so on?
Example:
this one is Debian-based:
docker run --name sql -d buildsql
this one is CentOS-based:
docker run --name php --link sql:db -d buildphp
Containers talk to each other over the network, so they are normally unaware of the OS being used by other containers, in exactly the same way that your browser doesn't really care about the OS of the webservers it talks to.
Most of the official images are based on Debian, so you quite often find your containers are all running Debian, but there's no need for this to be true. Some containers don't have an OS at all and just contain a binary that gets run when the container starts.
In short, there is no problem in using different OSes, unless you have some funky application-specific problem with networking.
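A small sketch reusing the hypothetical image names from the question; note that I've swapped the legacy --link flag for a user-defined network, which is my own suggestion rather than something the answer prescribes:
$ docker network create appnet
# buildsql could be built FROM debian and buildphp FROM centos - the network layer doesn't care
$ docker run --network appnet --name sql -d buildsql
$ docker run --network appnet --name php -d buildphp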
I have MongoDB 2.6.9 installed on my Ubuntu machine. I want to have 2.6.10 also installed on the same machine and run only one at a time.
I don't have much knowledge of MongoDB. How can I do it?
Speaking for Linux, if you just install it twice in different paths (REMEMBER to change the port) I think they will run fine; the problem is that you could end up using the same log file for both instances. I suggest you take a look at the two following links:
stack here
and also the MongoDB manual's runtime configuration page, where you will see which paths you have to change for the second installation, here
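For instance, the second installation could be started with its own port, data directory and log file (a sketch; the paths and port are placeholders I made up, and the directories must exist first):
$ mkdir -p /data/db2
# second instance: different port, data path and log file from the default install
$ /path/to/mongodb-2.6.10/bin/mongod --port 27018 --dbpath /data/db2 \
    --logpath /var/log/mongodb/mongod-2.6.10.log --fork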
Since you are already on Ubuntu, install Docker and create as many containers running different mongo versions as you want. You will just have to bind them to different ports on the local machine.
Install Docker:
sudo apt-get update && sudo apt-get install wget
wget -qO- https://get.docker.com/ | sh
Create MongoDB containers running mongo 2.6.9 and 2.6.10 (mongo listens on 27017 inside the container):
docker run -p 27017:27017 -d mongo:2.6.9
docker run -p 27018:27017 -d mongo:2.6.10
Connect to MongoDB version 2.6.9 using 127.0.0.1 on port 27017.
Connect to MongoDB version 2.6.10 using 127.0.0.1 on port 27018.
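To verify which version you're connected to, the mongo shell can print it (a sketch; it assumes the mongo client is installed on the host):
$ mongo --host 127.0.0.1 --port 27017 --eval 'db.version()'   # should report 2.6.9
$ mongo --host 127.0.0.1 --port 27018 --eval 'db.version()'   # should report 2.6.10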
This is really frustrating me. I have a DO VPS with Ubuntu 14.04 (64-bit) installed.
I installed VestaCP as a control panel on it and host some PHP-based personal projects.
I also installed Meteor on it but never used it. Now, when I try to create a project and run it ('meteor create rt', then 'cd rt', then 'meteor'),
it gives the following error:
[[[[[ /home/admin/code/rt ]]]]]
=> Started proxy.
Unexpected mongo exit code 1. Restarting.
Unexpected mongo exit code 1. Restarting.
Unexpected mongo exit code 1. Restarting.
Can't start Mongo server.
root@RD:/home/admin/code/rt#
Could anyone please help? Please ask me for more information if required.
**** EDIT ****
I created a fresh DigitalOcean server and it gives the same error there. Is it some issue with DigitalOcean? The DigitalOcean file system? I am confused. I have tried it on different flavours of Linux with the same result. All are fresh Linux installations.
I finally got the solution. Posting it here for others.
The problem was that a few environment variables which MongoDB looks for at startup were not set.
Set the variables LC_ALL and LANG and it works fine (usually setting LC_ALL alone will do).
First, type the locale command and check the output; you will see that it says something about LC_ALL not being set.
Now, add these two lines to /etc/environment:
LC_ALL=en_US.UTF-8
LANG=en_US.UTF-8
This solution is for Ubuntu 12.04 +
Other variants may require similar work.
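To apply the fix in the current shell without logging out or rebooting, something like this should work (a sketch assuming the en_US.UTF-8 locale can be generated on the box):
$ sudo locale-gen en_US.UTF-8    # generate the locale if it is missing
$ export LC_ALL=en_US.UTF-8
$ export LANG=en_US.UTF-8
$ locale                         # verify that LC_ALL is now set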
As far as I can tell, "Unexpected mongo exit code 1" is still an uncaught exception.
You can try bringing your C/C++ compilers up to date. Have a look here.
It says :
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install gcc-4.6
sudo apt-get install g++-4.6
All the best!
So we have narrowed the issue down to meteor's mongo installation on your box (though I think we were pretty sure of this all along). Let's attempt to debug that a bit. The way I have done this in the past is to try to start meteor's mongo with the mongod binary provided by meteor, without running the meteor server. This should show you the error that is causing Mongo to exit. First you need to find that binary. On my Mint install (which should be similar to Ubuntu) it is at:
~/.meteor/packages/meteor-tool/.1.1.3.4sddkj++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/mongodb/bin/mongod
You can look at that location on your Ubuntu box or you can run something like this to get the location:
find ~/.meteor/ -name mongod
Once you find it, go to the directory of the meteor project you are attempting to run; in that directory you should find this location:
<your meteor project>/.meteor/local
cd into that directory and run the following command:
~/.meteor/packages/meteor-tool/.1.1.3.4sddkj++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/mongodb/bin/mongod --dbpath ./
From there you can analyze the output (or update the question so we can see the output) and this should show you the mongo error you are receiving on startup and allow you to fix it.
I had the same issue trying to start a Meteor app: the MongoDB server was being terminated unexpectedly. Virtual Linux servers from some providers, like the one you mentioned, often come without a swap partition (check the /etc/fstab file), so if there is not enough memory to allocate for the MongoDB server, the Meteor app can't start. You can create a swap partition or install swapspace:
sudo apt-get install swapspace
After that I was able to start the Meteor app... Just be patient, as swap memory is not as fast as RAM.
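If you'd rather create swap manually than install swapspace (a sketch; the 2 GB size is an arbitrary example, adjust it to your droplet, and fallocate assumes a filesystem that supports it):
$ sudo fallocate -l 2G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # make it persistent across reboots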
Since, due to some "smart" StackExchange policy, I cannot upvote or comment on the working solution:
The quoted answer also works on DigitalOcean with CentOS 7 x64 (vmlinuz-3.10.0-123.8.1.el7.x86_64):
"First, type the locale command and see the output; you will see that it says something about LC_ALL not being set.
Now, add these two lines to /etc/environment and it worked."
I changed the locale setting to match my needs.
Fixed on my Debian 8 with the following bash command (use sudo if needed):
localedef -i en_US -f UTF-8 en_US.UTF-8