Unable to start Cassandra: Unable to lock JVM memory (ENOMEM) - docker-compose

I am trying to start a Cassandra container using docker-compose. I am issuing the following command from my MacBook terminal:
docker-compose -f src/main/docker/cassandra.yml up
My cassandra.yml file is below:
version: '2'
services:
  primecast-cassandra:
    image: cassandra:3.9
    # volumes:
    #   - ~/volumes/jhipster/primecast/cassandra/:/var/lib/cassandra/data
    ports:
      - 7000:7000
      - 7001:7001
      - 7199:7199
      - 9042:9042
      - 9160:9160
  primecast-cassandra-migration:
    extends:
      file: cassandra-migration.yml
      service: primecast-cassandra-migration
    environment:
      - CREATE_KEYSPACE_SCRIPT=create-keyspace.cql
However, when I run the docker-compose command to start the Cassandra services, I get some warnings on the terminal and eventually it stops:
primecast-cassandra_1 | WARN 14:33:45 Unable to lock JVM memory (ENOMEM).
This can result in part of the JVM being swapped out, especially with mmapped I/O enabled. Increase RLIMIT_MEMLOCK or run Cassandra as root.
I'd appreciate it if you can help.
Thank you.

Essentially, this message is telling you that you need to specify memlock unlimited for the Linux system resource limit on locked-in memory for a specific user. As per the documentation on recommended production settings, the normal (non-Docker) way to solve this is to adjust the /etc/security/limits.conf file (or equivalent) like this:
cassandra - memlock unlimited
In the above example, cassandra is the user that Cassandra is running under. If you're running Cassandra as another user, you'll need to either change that accordingly or just use an asterisk (*) to enforce that setting for all users. For example:
* - memlock unlimited
In this case, with version 2 of the Compose file format, ulimits is a valid entry under the service configuration. Set memlock to -1 so that both the soft and hard limits are "unlimited":
primecast-cassandra:
  image: cassandra:3.9
  ulimits:
    memlock: -1
  ports:
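If you want to confirm the limit actually changed after bringing the stack back up, one quick check (assuming the service name from the file above) is to read the locked-memory ulimit inside the running container:

docker-compose -f src/main/docker/cassandra.yml exec primecast-cassandra bash -c 'ulimit -l'
# should print "unlimited" once memlock is set to -1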

Related

my docker-compose is failing on this line. Why?

I need to copy a php.ini file that I have (with Xdebug enabled) to /bitnami/php-fpm/conf/. I am using a Bitnami Docker container, and I want to use Xdebug to debug the PHP code in my app. Therefore I must enable Xdebug in the php.ini file. The /bitnami/php-fpm container on the repository had this comment added to it:
5.5.30-0-r01 (2015-11-10)
php.ini is now exposed in the volume mounted at /bitnami/php-fpm/conf/ allowing users to change the defaults as per their requirements.
So I am trying to copy my php.ini file to /bitnami/php-fpm/conf/php.ini in the docker-compose.yml. Here is the php-fpm section of the .yml:
php-fpm:
  image: bitnami/php-fpm:5.5.26-3
  volumes:
    - ./app:/app
    - php.ini:/bitnami/php-fpm/conf
  networks:
    - net

volumes:
  database_data:
    driver: local

networks:
  net:
Here is the error I get: ERROR: Named volume "php.ini:/bitnami/php-fpm/conf:rw" is used in service "php-fpm" but no declaration was found in the volumes section.
Any idea how to fix this?
I will assume that your indentation is correct; otherwise you probably wouldn't get that error. Always run your YAML files through a lint tool such as http://www.yamllint.com/.
In terms of your volume mounts, the first one has the correct syntax but the second does not, so Docker thinks it is a named volume.
Assuming php.ini is in the root directory next to your docker-compose.yml, and mounting it onto the file path inside the container rather than onto the conf directory:
volumes:
  - ./app:/app
  - ./php.ini:/bitnami/php-fpm/conf/php.ini
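Put together with the rest of your file, the service section would look roughly like this; running docker-compose config afterwards is a quick way to see how Compose resolves the mounts before starting anything:

php-fpm:
  image: bitnami/php-fpm:5.5.26-3
  volumes:
    - ./app:/app
    - ./php.ini:/bitnami/php-fpm/conf/php.ini
  networks:
    - net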

Is using a non-external volume a good way to persist data in dockerized Postgres?

After reading
How to persist data in a dockerized postgres database using volumes
I know that mounting a volume to a local host folder is a good way to prevent data loss in case something fatal happens to the Docker process.
I'm using Windows as the host and a Linux-based Docker image. Windows has problems mounting the Postgres data directory (from the Linux-based image) to a Windows host directory - Mount Postgres data to Windows host directory
Later, I discovered another technique:
postgres:
  build:
    context: ./postgres
    dockerfile: Dockerfile
  restart: always
  ports:
    - "5432:5432"
  volumes:
    - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data: {}
With this I can use a non-external volume (I have no idea what that is yet).
After implementing this, I tried running:
docker-compose down
docker-compose up
The data is still there.
May I know,
Is this still a good way to persist data in dockerized postgres?
Where exactly is the data being stored? Is it some hidden directory on the host machine?
Is this still a good way to persist data in dockerized postgres?
No, but also yes. Postgres is a database, so its data should be externalized to avoid data loss in case of a connection failure to the container, etc.
A good practice, though, would be to run two databases, one in a container and one on the host, or in two containers (then with the data inside the containers), synchronized in master/slave mode. That gives you high availability during container maintenance, for example, but it is high availability only, not a backup! (If it doesn't exist yet, you have to create it, of course.)
Where exactly is the data being stored? Is it some hidden directory on the host machine?
No, it is stored wherever you map /var/lib/postgresql/data to. In your example that is the named volume postgres_data, which Docker keeps in its own storage area on the host rather than in your project directory. Using a full host path is good practice, because then you can see straight from your file where the data ends up.
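If you want to see the exact location, Docker can report it. The volume name is prefixed with the Compose project name (by default the directory name; myproject below is just a placeholder), and on Docker for Windows the path lives inside the Docker VM rather than directly on the Windows filesystem:

docker volume ls
docker volume inspect myproject_postgres_data
# look at the "Mountpoint" field, e.g.
# /var/lib/docker/volumes/myproject_postgres_data/_data

Also note that docker-compose down leaves named volumes in place, which is why your data survived; docker-compose down -v would remove them along with the data.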

Metadata fetch failed with Stackdriver logging on Google Compute Engine

I am integrating my Go application with Stackdriver logging via cloud.google.com/go/logging. My application works perfectly fine when deployed to GCP on the App Engine flexible environment. However, when I run my app locally, as soon as I hit localhost:8080 I get the following error on my console and the application gets killed automatically:
Metadata fetch failed: Get http://metadata/computeMetadata/v1/instance/attributes/gae_project: dial tcp: lookup metadata on 127.0.0.11:53: server misbehaving
My understanding is that when running locally, the code should not try to access Google's internal metadata, which is what is happening above. I dug deeper and it looks like this part is handled in cloud.google.com/go/compute/metadata/metadata.go. I might be wrong here, but it looks like I have to set an environment variable for the code to work properly. Pasting from the documentation in metadata.go:
// metadataHostEnv is the environment variable specifying the
// GCE metadata hostname. If empty, the default value of
// metadataIP ("169.254.169.254") is used instead.
// This is variable name is not defined by any spec, as far as
// I know; it was made up for the Go package.
metadataHostEnv = "GCE_METADATA_HOST"
If all of my understanding is true, what should I set GCE_METADATA_HOST to? If I am wrong about my understanding, why am I seeing this error? Is it possible that this error has something to do with my Docker and not with Stackdriver logging?
I am running my app in a container with docker-compose. I am performing go install, which generates the binary, and then I am simply executing the binary.
EDIT: This is my compose file
version: '3'
services:
  dev:
    image: <gcr_image>
    entrypoint:
      - /bin/sh
      - -c
      - "cat ./config-scripts/config.sh >> /root/.bashrc; bash"
    command: bash
    stdin_open: true
    tty: true
    working_dir: /code
    environment:
      - ENV1=value1
      - ENV2=value2
    ports:
      - "8080:8080"
    volumes:
      - .:/code
      - ~/.npmrc:/root/.npmrc
      - ~/.config/gcloud:/root/.config/gcloud
      - /var/run/docker.sock:/var/run/docker.sock
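One common pattern for running the same binary locally is to only create the Stackdriver client when the metadata server is actually reachable, and fall back to a plain logger otherwise, so the local container never performs that lookup. A minimal sketch, assuming the project ID comes from a GOOGLE_CLOUD_PROJECT environment variable (that variable name and the log name are illustrative, not from the question):

package main

import (
	"context"
	"log"
	"os"

	"cloud.google.com/go/compute/metadata"
	"cloud.google.com/go/logging"
)

// newLogger returns a Stackdriver-backed *log.Logger when running on GCE/Flex,
// and a plain stderr logger everywhere else (e.g. a local docker-compose container).
func newLogger(ctx context.Context) (*log.Logger, func()) {
	if !metadata.OnGCE() {
		return log.New(os.Stderr, "", log.LstdFlags), func() {}
	}
	project := os.Getenv("GOOGLE_CLOUD_PROJECT") // assumed env var; set it in your deployment
	client, err := logging.NewClient(ctx, project)
	if err != nil {
		log.Fatalf("stackdriver client: %v", err)
	}
	return client.Logger("my-app").StandardLogger(logging.Info), func() { _ = client.Close() }
}

func main() {
	logger, closeFn := newLogger(context.Background())
	defer closeFn()
	logger.Println("app started")
}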

Postgres with Docker loses connection and gets filesystem issues

I am using Postgres 9.4.1 with Docker 17.06.0 and docker-compose 1.14.0.
When I work with it, it often just loses the connection, and in the logs I get this:
LOG:  invalid length of startup packet
LOG:  could not send data to client: Broken pipe
FATAL:  connection to client lost
ERROR:  could not open file "base/1/11943": No such file or directory
FATAL:  could not open file "base/12141/11943": No such file or directory
After restarting the container it doesn't get any better:
postgres cannot access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory
My only solution is to:
Stop the container and remove it
Remove all data from the volumes
Restart Docker
Bring the container up
To be honest, it's quite an annoying process. It always happens when I disconnect from my current network, and sometimes just when I am working with Postgres.
Maybe I'm missing some permission configuration. This is my docker-compose.yml:
postgres:
  image: postgres:9.4.1
  ports:
    - 54320:54320
  environment:
    - POSTGRES_USER=xxx
    - POSTGRES_PASSWORD=xxx
  volumes:
    - /tmp/postgres/staging:/var/lib/postgresql/data
  restart: always
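One thing worth pointing out about that file: /tmp on the host is cleared on reboot (and often periodically), so keeping /var/lib/postgresql/data under /tmp/postgres/staging means the cluster files can vanish underneath a running container, which would produce exactly the kind of "No such file or directory" errors shown above. A sketch of the same service using a named volume instead, as in the previous question (the container-side port is assumed to be the default 5432):

version: '2'
services:
  postgres:
    image: postgres:9.4.1
    ports:
      - "54320:5432"
    environment:
      - POSTGRES_USER=xxx
      - POSTGRES_PASSWORD=xxx
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: always

volumes:
  postgres_data: {}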

Need some advice dockerizing MongoDB

I am playing with MongoDB and Docker and at this point I am trying to create a useful image for myself to use at work. I have created the following Dockerfile:
FROM mongo:2.6
VOLUME /data/db /data/configdb
CMD ["mongod"]
EXPOSE 27017
And I have added it to my docker-compose.yml file:
version: '2'
services:
  ### PHP/Apache Container
  php-apache:
    container_name: "php55-dev"
    image: reynierpm/php55-dev
    ports:
      - "80:80"
    environment:
      PHP_ERROR_REPORTING: 'E_ALL & ~E_DEPRECATED & ~E_NOTICE'
    volumes:
      - ~/mmi:/var/www
      - ~/data:/data
    links:
      - mongodb
  ### MongoDB Container
  mongodb:
    container_name: "mongodb"
    build: ./mongo
    environment:
      MONGODB_USER: "xxxx"
      MONGODB_DATABASE: "xxxx"
      MONGODB_PASS: "xxxx"
    ports:
      - "27017:27017"
    volumes:
      - ~/data/mongo:/data/db
I have some questions regarding this setup I have made:
Do I need VOLUME /data/db /data/configdb in the Dockerfile, or would it be enough to have the line ~/data/mongo:/data/configdb in docker-compose.yml?
I am assuming (and I took it from here) that as soon as I build the Mongo image I will be creating a database and giving full permissions to the user and password set in the environment variables. Am I right? (I couldn't find anything helpful here.)
How do I import a current Mongo backup (several JSON files) into the database that should be created in the Mongo container? I believe I need to run the mongorestore command, but how? Do I need to create a script and run it each time the container starts? Or should I run it during the image build? What's the best approach?
Do I need VOLUME /data/db /data/configdb in the Dockerfile, or would it be enough to have the line ~/data/mongo:/data/configdb in docker-compose.yml?
VOLUME is not required when you are mounting a host directory, but it is helpful as metadata. VOLUME does provide special "copy data on volume creation" semantics when mounting a Docker volume (not a host directory), which will affect your choice of data initialisation method.
I am assuming (and I took it from here) that as soon as I build the Mongo image I will be creating a database and giving full permissions to the user and password set in the environment variables. Am I right? (I couldn't find anything helpful here.)
MONGODB_USER, MONGODB_DATABASE and MONGODB_PASS do not do anything in the official mongo Docker image or to mongod itself.
The mongo image has added support for similar environment variables:
MONGO_INITDB_ROOT_USERNAME
MONGO_INITDB_ROOT_PASSWORD
MONGO_INITDB_DATABASE
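On newer tags of the official image (the old 2.6 line predates this feature), those variables are read on first start against an empty data directory, so a service along these lines would create the root user and an initial database; the tag and values below are placeholders:

mongodb:
  image: mongo:3.6
  environment:
    MONGO_INITDB_ROOT_USERNAME: "admin"
    MONGO_INITDB_ROOT_PASSWORD: "secret"
    MONGO_INITDB_DATABASE: "mydb"
  ports:
    - "27017:27017"
  volumes:
    - ~/data/mongo:/data/db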
How do I import a current Mongo backup (several JSON files) into the database that should be created in the Mongo container? I believe I need to run the mongorestore command, but how? Do I need to create a script and run it each time the container starts? Or should I run it during the image build? What's the best approach?
Whether you initialise data at build or runtime is up to your usage. As mentioned previously, Docker can copy data from a specified VOLUME into a volume it creates. If you are mounting a host directory you probably need to do the initialisation at run time.
mongorestore requires a running server to restore to. During a build you would need to launch the server and restore in the same RUN step. At runtime you might need to include a startup script that checks for the existence of your database.
Mongo is able to initialise any empty directory into a blank Mongo instance, so you don't need to worry about Mongo not starting.
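To make the runtime option concrete: newer tags of the official image also execute any *.sh or *.js files placed in /docker-entrypoint-initdb.d/ on first start (only when the data directory is empty), which is a convenient hook for the import. The paths, file names, and the use of mongoimport for JSON exports below are illustrative assumptions; if you also set the root-user variables you may need to pass credentials to mongoimport:

#!/bin/sh
# mongo-init/10-seed.sh, bind-mounted to /docker-entrypoint-initdb.d/10-seed.sh
# Runs only when /data/db is empty, i.e. on the first initialisation.
set -e
for f in /seed/*.json; do
  coll=$(basename "$f" .json)
  mongoimport --db "${MONGO_INITDB_DATABASE:-mydb}" --collection "$coll" --file "$f"
done

In the compose file that would mean two extra mounts on the mongodb service, for example - ./mongo-init:/docker-entrypoint-initdb.d and - ./seed:/seed.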