Mystery "guest" user for RabbitMQ - docker-compose

I know the "guest" user is the default for RabbitMQ, but I thought I'd configured everything to use different names.
My stack is Django / Celery / RabbitMQ, running in Docker.
First up, the error: I just get loads of these, every few seconds:
rabbitmq_1 | 2020-07-29 08:28:00.775 [warning] <0.1234.0> HTTP access denied: user 'guest' - invalid credentials
rabbitmq_1 | 2020-07-29 08:28:05.775 [warning] <0.1240.0> HTTP access denied: user 'guest' - invalid credentials
rabbitmq_1 | 2020-07-29 08:28:10.776 [warning] <0.1246.0> HTTP access denied: user 'guest' - invalid credentials
rabbitmq_1 | 2020-07-29 08:28:15.776 [warning] <0.1252.0> HTTP access denied: user 'guest' - invalid credentials
RabbitMQ Dockerfile
FROM rabbitmq:management-alpine
ENV RABBITMQ_USER rabbit_user
ENV RABBITMQ_PASSWORD rabbit_user
ADD rabbitmq.conf /etc/rabbitmq/
ADD definitions.json /etc/rabbitmq/
RUN chown rabbitmq:rabbitmq /etc/rabbitmq/rabbitmq.conf /etc/rabbitmq/definitions.json
CMD ["rabbitmq-server"]
rabbitmq.conf
management.load_definitions = /etc/rabbitmq/definitions.json
definitions.json
{
  "users": [
    {
      "name": "rabbit_user",
      "password": "rabbit_user",
      "tags": ""
    },
    {
      "name": "admin",
      "password": "admin",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    {
      "name": "\/phoenix"
    }
  ],
  "permissions": [
    {
      "user": "rabbit_user",
      "vhost": "\/phoenix",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ],
  "parameters": [],
  "policies": [],
  "exchanges": [],
  "bindings": [],
  "queues": [
    {
      "name": "high_prio",
      "vhost": "\/phoenix",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    },
    {
      "name": "low_prio",
      "vhost": "\/phoenix",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    }
  ]
}
docker-compose.yml
rabbitmq:
  build:
    context: ./rabbitmq
    dockerfile: Dockerfile
  # image: rabbitmq:3-management-alpine
  ports:
    - "15672:15672" # RabbitMQ management plugin
  environment:
    - RABBITMQ_DEFAULT_USER=rabbit_user
    - RABBITMQ_DEFAULT_PASS=rabbit_user
    - RABBITMQ_DEFAULT_VHOST=phoenix
  expose:
    - "5672" # Port exposed between docker containers
  depends_on:
    - db
    - cache
celery_worker:
  <<: *django
  command: bash -c "celery -A phoenix.celery worker --loglevel=INFO -n worker1@%h"
  environment:
    - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
    - EMAIL_HOST_PASSWORD=${EMAIL_HOST_PASSWORD}
    - DJANGO_SETTINGS=${DJANGO_SETTINGS}
    # HC the rabbit user. Not secure obvs, but OK for PoC.
    - RABBITMQ_DEFAULT_USER=rabbit_user
    - RABBITMQ_DEFAULT_PASS=rabbit_user
  ports: []
  links:
    - rabbitmq
    - cache
  depends_on:
    - db
    - cache
    - rabbitmq
settings.py
CELERY_BROKER_URL = "amqp://rabbit_user:rabbit_user@rabbitmq:5672/phoenix"
CELERY_BROKER_VHOST = "phoenix"
CELERY_RESULT_BACKEND = "django-db"
CELERY_CACHE_BACKEND = "default"
CELERY_TIME_ZONE = TIME_ZONE
I had it all working before, when I just pulled the default RabbitMQ container in the docker-compose file. Now I've created a specific Dockerfile for RabbitMQ and set up rabbit_user and the vhost "phoenix". It all seems to be working - tasks run, and I see the message stats in the Rabbit console - but I'm suffering these random "guest" login attempts. The word "guest" appears nowhere in my codebase, so somewhere RabbitMQ is using the default instead of "rabbit_user", but I can't see where.

Rather typical that I solved this by "fixing" something else...
I noticed in my RMQ panel that the low_prio and high_prio queues had vhost "/phoenix", while the Celery workers had vhost "phoenix" (from my reading, I'd thought the RMQ config required the leading slash). I amended this so that all queues were allocated to "phoenix", and the mystery guest logins disappeared.
I can only assume that since Celery was configured for the vhost "phoenix", "/phoenix" was treated as a different vhost with no users assigned to it, so RabbitMQ fell back to the "guest" default.
I'm not entirely sure why anything was connecting to it - I'd sent nothing to those queues yet - but in case somebody else hits this issue, this is what solved it for me.
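For anyone tripped up by the same thing: in an AMQP broker URL the path component names the vhost, so "phoenix" and "/phoenix" are two distinct vhosts. A small sketch to illustrate the parsing (the helper function is hypothetical, not part of any library; a vhost literally named "/phoenix" would need its slash percent-encoded as %2F in the URL):

```python
from urllib.parse import urlparse

def vhost_from_url(url):
    # The vhost is the URL path minus the leading separator "/";
    # an encoded %2F decodes back to a literal slash in the vhost name.
    path = urlparse(url).path
    return path[1:].replace("%2F", "/").replace("%2f", "/")

print(vhost_from_url("amqp://rabbit_user:pw@rabbitmq:5672/phoenix"))     # phoenix
print(vhost_from_url("amqp://rabbit_user:pw@rabbitmq:5672/%2Fphoenix"))  # /phoenix
```

So a definitions file declaring "\/phoenix" and a broker URL ending in "/phoenix" address different vhosts, which matches the symptom above.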

Related

Cannot create a mongo database with docker

I'm having trouble creating a mongo database using the docker-compose command. Docker Desktop tells me that everything is up and running, including the db, but all I get are the standard 'admin, config, local' databases, not the db I want to create. Here's my docker-compose.yaml:
version: '3'
services:
  app:
    build: ./
    entrypoint: ./.docker/entrypoint.sh
    ports:
      - 3000:3000
    volumes:
      - .:/home/node/app
    depends_on:
      - db
  db:
    image: mongo:4.4.4
    restart: always
    volumes:
      - ./.docker/dbdata:/data/db
      - ./.docker/mongo:/docker-entrypoint-initdb.d
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root
      - MONGO_INITDB_DATABASE=nest
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_SERVER=db
      - ME_CONFIG_MONGODB_AUTH_USERNAME=root
      - ME_CONFIG_MONGODB_AUTH_PASSWORD=root
      - ME_CONFIG_MONGODB_ADMINUSERNAME=root
      - ME_CONFIG_MONGODB_ADMINPASSWORD=root
    depends_on:
      - db
my init.js inside .docker/mongo
db.routes.insertMany([
  {
    _id: "1",
    title: "Primeiro",
    startPosition: { lat: -15.82594, lng: -47.92923 },
    endPosition: { lat: -15.82942, lng: -47.92765 },
  },
  {
    _id: "2",
    title: "Segundo",
    startPosition: { lat: -15.82449, lng: -47.92756 },
    endPosition: { lat: -15.82776, lng: -47.92621 },
  },
  {
    _id: "3",
    title: "Terceiro",
    startPosition: { lat: -15.82331, lng: -47.92588 },
    endPosition: { lat: -15.82758, lng: -47.92532 },
  }
]);
and my Dockerfile
FROM node:14.18.1-alpine
RUN apk add --no-cache bash
RUN npm install -g @nestjs/cli
USER node
WORKDIR /home/node/app
and this is the 'error' log I get from docker when I run the nest container with mongodb, the nest app, and mongo-express (there is actually a lot more, but SO keeps thinking that it is spam for some reason):
about to fork child process, waiting until server is ready for connections.
Successfully added user: {
"user" : "root",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
Error saving history file: FileOpenFailed Unable to open() file /home/mongodb/.dbshell: No such file or directory
{"t":{"$date":"2022-06-01T19:39:15.542+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn2","msg":"Connection ended","attr":{"remote":"127.0.0.1:39304","connectionId":2,"connectionCount":0}}
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init.js
{"t":{"$date":"2022-06-01T19:39:15.683+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:39310","connectionId":3,"connectionCount":1}}
{"t":{"$date":"2022-06-01T19:39:15.684+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn3","msg":"client metadata","attr":{"remote":"127.0.0.1:39310","client":"conn3","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.4"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"18.04"}}}}
{"t":{"$date":"2022-06-01T19:39:15.701+00:00"},"s":"I", "c":"STORAGE", "id":20320, "ctx":"conn3","msg":"createCollection","attr":{"namespace":"nest.routes","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"f689868e-af6d-4ec6-b555-dcf520f24788"}},"options":{}}}
{"t":{"$date":"2022-06-01T19:39:15.761+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"conn3","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"nest.routes","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}}
uncaught exception: ReferenceError: colection is not defined :
@/docker-entrypoint-initdb.d/init.js:23:1
failed to load: /docker-entrypoint-initdb.d/init.js
exiting with code -3
this is what running docker-compose ps shows
NAME COMMAND SERVICE STATUS PORTS
nest-api-app-1 "./.docker/entrypoin…" app running 0.0.0.0:3000->3000/tcp
nest-api-db-1 "docker-entrypoint.s…" db running 27017/tcp
nest-api-mongo-express-1 "tini -- /docker-ent…" mongo-express running 0.0.0.0:8081->8081/tcp
This is what my Docker Desktop shows.
The MongoDB container only creates a database if no database already exists. You probably already have one, which is why a new database isn't created and your initialization script isn't run.
Delete the contents of ./.docker/dbdata on the host. Then start the containers with docker-compose and Mongo should create your database for you.
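Spelling that out, a rough command sequence (assuming the bind-mount paths from the compose file above; note this destroys the existing local data):

```shell
# Stop the stack, then remove the old database files so that the
# entrypoint runs the scripts in /docker-entrypoint-initdb.d again.
docker-compose down
rm -rf ./.docker/dbdata
docker-compose up -d db
# Watch the init script run in the logs:
docker-compose logs -f db
```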

Cannot configure a Mongo replicaSet from docker init script

I am trying to set up a 2 node replicaSet in docker for local development only. Single node already works fine, but there are keyfile issues when trying to add a member as part of the docker init script (NB I see the keyfile is set correctly from the logs). The same command works fine from a shell though, not via the init script.
Basically, the current config has worked fine for one node, but adding another gives the following error:
mongo_1 | {"t":{"$date":"2021-07-21T16:33:19.583+00:00"},"s":"W", "c":"REPL", "id":23724, "ctx":"ReplCoord-0","msg":"Got error response on heartbeat request","attr":{"hbStatus":{"code":13,"codeName":"Unauthorized","errmsg":"command replSetHeartbeat requires authentication"},"requestTarget":"mongo-secondary:27017","hbResp":{"ok":1.0}}}
mongo_1 | {"t":{"$date":"2021-07-21T16:33:19.583+00:00"},"s":"E", "c":"REPL", "id":21426, "ctx":"conn2","msg":"replSetInitiate failed","attr":{"error":{"code":74,"codeName":"NodeNotFound","errmsg":"replSetInitiate quorum check failed because not all proposed set members responded affirmatively: mongo-secondary:27017 failed with command replSetHeartbeat requires authentication"}}}
If I remove mongo-secondary from the set, then after startup use a shell to mongo to load the exact same config, everything works fine (the keyfile is used and the set is made with both members).
Currently my config is:
# docker-compose.yml
mongo: &MONGO
  image: mongo:4.4
  restart: unless-stopped
  volumes:
    - mongo_data:/data/db
    - ./scripts/docker/mongo/001_mongo_init.js:/docker-entrypoint-initdb.d/001_mongo_init.js:ro
    - ./scripts/docker/mongo/mongo-entrypoint.sh:/mongo-entrypoint
    - ./conf/mongodb/mongod-config.yml:/etc/mongod.yml
  entrypoint: sh /mongo-entrypoint
  ports:
    - 27017:27017
  env_file:
    - ./env/mongo.env
  command: --auth --config /etc/mongod.yml
  extra_hosts:
    - mongo:127.0.0.1
mongo-secondary:
  <<: *MONGO
  volumes:
    - mongo_secondary_data:/data/db
    - ./scripts/docker/mongo/mongo-entrypoint.sh:/mongo-entrypoint
    - ./conf/mongodb/mongod-config.yml:/etc/mongod.yml
  ports:
    - 27018:27017
  extra_hosts:
    - mongo-secondary:127.0.0.1
# mongo-entrypoint.sh
#!/bin/sh
set -eu

# Create the keyfile used for mongo replicaSet auth.
keyfile=/home/keyfile
echo "Creating replicaSet keyfile..."
echo "keyfile" > ${keyfile}
chmod 0400 $keyfile
chown mongodb $keyfile
echo "Created replicaSet keyfile."

# original entrypoint
exec docker-entrypoint.sh "$@"
// 001_mongo_init.js
function getEnv(envVar) {
  const ret = run('sh', '-c', `printenv ${envVar} > /tmp/${envVar}.txt`);
  if (ret !== 0) throw Error(`Value "${envVar}" is not present in the environment.`);
  return cat(`/tmp/${envVar}.txt`).trim(); // NB cat leaves a \n at the end of text
}

// create replicaset
const rsconf = {
  _id: getEnv('MONGODB_REPLICA_SET'),
  members: [
    {
      _id: 0,
      host: 'mongo:27017',
    },
    {
      _id: 1,
      host: 'mongo-secondary:27017',
      priority: 0, // prevent from becoming master
    },
  ],
};

rs.initiate(rsconf);
rs.conf();

// further code to create users etc.
# mongod-config.yml
---
security:
  keyFile: /home/keyfile
replication:
  replSetName: rs0
  enableMajorityReadConcern: true

GKE service catalog BigQuery ACL/permission problems - The user xx does not have bigquery.jobs.create permission in project yy

I am trying to use the service catalog of Google Kubernetes to connect to BigQuery. I had however a lot of issues regarding IAM/ACL permissions.
I added the Owner role to the myProjectId@cloudservices.gserviceaccount.com account, since Editor was not enough to access IAM during the creation of a binding's service account.
After manually adding projectReaders, projectWriters and projectOwners to the ACL of the dataset, I could finally read and write to BigQuery, but I can not create jobs, since this requires project permissions. The command to update the dataset was
bq update --source /tmp/roles myDatasetId
After that I tried to query bq, but it failed with
root@batch-shell:/app# cat sql/xxx.sql | bq query --format=none --allow_large_results=true --destination_table=myDatasetId.pages_20180730 --maximum_billing_tier 3
BigQuery error in query operation: Access Denied: Project my-staging-project: The user k8s-bigquery-acc@my-staging-project.iam.gserviceaccount.com does not have
bigquery.jobs.create permission in project my-staging-project.
I tried to set the account's role to "Owner" and "BigQuery Job User" with no effect. I even tried all the other accounts as Owner.
These are my current ACL permissions:
[16:52:45] blackfalcon:~/src/myproject/batch :chris $ bq --format=prettyjson show myDatasetId
{
  "access": [
    {
      "role": "WRITER",
      "specialGroup": "projectWriters"
    },
    {
      "role": "OWNER",
      "specialGroup": "projectOwners"
    },
    {
      "role": "OWNER",
      "userByEmail": "myProjectId@cloudservices.gserviceaccount.com"
    },
    {
      "role": "OWNER",
      "userByEmail": "k8s-bigquery-acc@my-staging-project.iam.gserviceaccount.com"
    },
    {
      "role": "READER",
      "specialGroup": "allAuthenticatedUsers"
    },
    {
      "role": "READER",
      "specialGroup": "projectReaders"
    }
  ],
  "creationTime": "1532859638248",
  "datasetReference": {
    "datasetId": "myDatasetId",
    "projectId": "my-staging-project"
  },
  "defaultTableExpirationMs": "8000000000",
  "description": "myproject Access myDatasetId",
  "id": "my-staging-project:myDatasetId",
  "kind": "bigquery#dataset",
  "lastModifiedTime": "1533184961736",
  "location": "US",
  "selfLink": "https://www.googleapis.com/bigquery/v2/projects/my-staging-project/datasets/myDatasetId"
}
[16:53:02] blackfalcon:~/src/myproject/batch :chris $ gcloud projects get-iam-policy my-staging-project
bindings:
- members:
  - serviceAccount:k8s-bigquery-acc@my-staging-project.iam.gserviceaccount.com
  - user:myemail@somedomain.com
  role: roles/bigquery.admin
- members:
  - serviceAccount:k8s-cloudsql-acc-staging@my-staging-project.iam.gserviceaccount.com
  role: roles/cloudsql.client
- members:
  - serviceAccount:service-myProjectId@compute-system.iam.gserviceaccount.com
  role: roles/compute.serviceAgent
- members:
  - serviceAccount:service-myProjectId@container-engine-robot.iam.gserviceaccount.com
  role: roles/container.serviceAgent
- members:
  - serviceAccount:myProjectId-compute@developer.gserviceaccount.com
  - serviceAccount:myProjectId@cloudservices.gserviceaccount.com
  - serviceAccount:service-myProjectId@containerregistry.iam.gserviceaccount.com
  role: roles/editor
- members:
  - serviceAccount:service-myProjectId@cloud-ml.google.com.iam.gserviceaccount.com
  role: roles/ml.serviceAgent
- members:
  - serviceAccount:myProjectId@cloudservices.gserviceaccount.com
  - user:myemail@somedomain.com
  role: roles/owner
- members:
  - serviceAccount:scg-fv6fz3sjnxo3cfpppcl2qs5edm@my-staging-project.iam.gserviceaccount.com
  role: roles/servicebroker.operator
- members:
  - serviceAccount:service-myProjectId@gcp-sa-servicebroker.iam.gserviceaccount.com
  role: roles/servicebroker.serviceAgent
- members:
  - serviceAccount:k8s-bigquery-acc@my-staging-project.iam.gserviceaccount.com
  - user:myemail@somedomain.com
  role: roles/storage.admin
version: 1
It seems I need to set the project's ACL for BigQuery, but everything I found indicates that setting the roles with IAM should be enough.
Any help would be greatly appreciated.
UPDATE: I solved this for now.
It turns out that the service account itself was not working properly. I tried giving the Owner role to the service account and used it locally to access a few gcloud resources; all of them failed with permission errors.
I then created a new service account with the same permissions and tried again, and it worked. So it seems the service account was somehow broken.
I deleted the bindings, then the IAM policy entries and the service account, and rebuilt the bindings.
Now it is working like a charm.
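For reference, the recreation steps were roughly as follows (a sketch only; the replacement account name k8s-bigquery-acc-2 is a placeholder, and the role is taken from the policy shown above):

```shell
# Delete the misbehaving service account...
gcloud iam service-accounts delete \
  k8s-bigquery-acc@my-staging-project.iam.gserviceaccount.com --quiet

# ...create a replacement...
gcloud iam service-accounts create k8s-bigquery-acc-2 \
  --display-name="BigQuery access for k8s"

# ...and grant it the same project-level role.
gcloud projects add-iam-policy-binding my-staging-project \
  --member="serviceAccount:k8s-bigquery-acc-2@my-staging-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.admin"
```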

ELK, Filebeat cuts some text from the message

I have ELK (filebeat->logstash->elasticsearch<-kibana) running on Win10. I fed it the following two lines, and found that Filebeat is not sending the whole text; some leading text is cut off:
2018-04-27 10:42:49 [http-nio-8088-exec-1] - INFO - app-info - injectip ip 192.168.16.89
2018-04-27 10:42:23 [RMI TCP Connection(10)-127.0.0.1] - INFO - org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring FrameworkServlet 'dispatcherServlet'
In the Filebeat console, I notice the following output:
2018-05-24T09:02:50.361+0800 DEBUG [publish] pipeline/processor.go:275 Publish event: {
  "@timestamp": "2018-05-24T01:02:50.361Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.3"
  },
  "source": "e:\\sjj\\xxx\\YKT\\ELK\\twoFormats.log",
  "offset": 97083,
  "message": "xec-1] - INFO - app-info - injectip ip 192.168.16.89",
  "prospector": {
    "type": "log"
  },
  "beat": {
    "name": "DESKTOP-M4AFV3I",
    "hostname": "DESKTOP-M4AFV3I",
    "version": "6.2.3"
  }
}
and
2018-05-24T09:11:10.374+0800 DEBUG [publish] pipeline/processor.go:275 Publish event: {
  "@timestamp": "2018-05-24T01:11:10.373Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.3"
  },
  "prospector": {
    "type": "log"
  },
  "beat": {
    "version": "6.2.3",
    "name": "DESKTOP-M4AFV3I",
    "hostname": "DESKTOP-M4AFV3I"
  },
  "source": "e:\\sjj\\xxx\\YKT\\ELK\\twoFormats.log",
  "offset": 97272,
  "message": "n(10)-127.0.0.1] - INFO - org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring FrameworkServlet 'dispatcherServlet'"
}
In the console output you can see the message field: some leading text has been cut off. In the first case, '2018-04-27 10:42:49 [http-nio-8088-e' is missing; in the second, '2018-04-27 10:42:23 [RMI TCP Connectio' is missing.
Why does Filebeat do this? It makes my regex throw a parse exception in Logstash.
I list my filebeat.yml file as follows:
#=========================== Filebeat prospectors =============================
filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  #enabled: false
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - e:\sjj\xxx\YKT\ELK\twoFormats.log

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java stack traces or C line continuation.

  # The regexp pattern that has to be matched. The example pattern matches all lines starting with whitespace.
  multiline.pattern: '^[[:space:]]'
  multiline.negate: false
  multiline.match: after

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["localhost:5044"]
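For reference, the multiline settings above mean: any line that starts with whitespace is appended to the previous event (match: after with negate: false). A rough Python sketch of that grouping logic, illustrative only and not Filebeat's actual implementation (note Python's re has no POSIX classes, so \s stands in for [[:space:]]):

```python
import re

# multiline.pattern: '^[[:space:]]' -- a line starting with whitespace
pattern = re.compile(r'^\s')

def group_events(lines, negate=False):
    """Rough sketch of Filebeat's multiline grouping for match: after."""
    events = []
    for line in lines:
        matched = bool(pattern.match(line))
        if negate:
            matched = not matched
        if matched and events:
            events[-1] += "\n" + line  # continuation: append to previous event
        else:
            events.append(line)      # new event starts here
    return events

logs = [
    "2018-04-27 10:42:49 [http-nio-8088-exec-1] - INFO - app-info - injectip ip 192.168.16.89",
    "    at com.example.Foo.bar(Foo.java:42)",  # hypothetical indented stack-trace line
]
print(group_events(logs))  # the indented line is merged into the first event
```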

How to skip role execution in Ansible

I'm trying to write the playbook.yml for my Vagrant machine, and I'm faced with the following problem.
Ansible prompts me to set these variables, and I set them to null/false/no/[just Enter], but the roles are executed regardless! How can I prevent this behavior? I just want nothing to run when no vars are set.
---
- name: Deploy Webserver
  hosts: webservers
  vars_prompt:
    run_common: "Run common tasks?"
    run_wordpress: "Run Wordpress tasks?"
    run_yii: "Run Yii tasks?"
    run_mariadb: "Run MariaDB tasks?"
    run_nginx: "Run Nginx tasks?"
    run_php5: "Run PHP5 tasks?"
  roles:
    - { role: common, when: run_common is defined }
    - { role: mariadb, when: run_mariadb is defined }
    - { role: wordpress, when: run_wordpress is defined }
    - { role: yii, when: run_yii is defined }
    - { role: nginx, when: run_nginx is defined }
    - { role: php5, when: run_php5 is defined }
I believe the variables will always be defined when you use vars_prompt, so "is defined" will always be true. What you probably want is something along these lines:
- name: Deploy Webserver
  hosts: webservers
  vars_prompt:
    - name: run_common
      prompt: "Product release version"
      default: "Y"
  roles:
    - { role: common, when: run_common == "Y" }
Edit: To answer your question, no it does not throw an error. I made a slightly different version and tested it using ansible 1.4.4:
- name: Deploy Webserver
  hosts: localhost
  vars_prompt:
    - name: run_common
      prompt: "Product release version"
      default: "N"
  roles:
    - { role: common, when: run_common == "Y" or run_common == "y" }
And roles/common/tasks/main.yml contains:
- local_action: debug msg="Debug Message"
If you run the above example and just hit Enter, accepting the default, then the role is skipped:
Product release version [N]:
PLAY [Deploy Webserver] *******************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [common | debug msg="Debug Message"] ************************************
skipping: [localhost]
PLAY RECAP ********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
But if you run this and enter Y or y when prompted then the role is executed as desired:
Product release version [N]:y
PLAY [Deploy Webserver] *******************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [common | debug msg="Debug Message"] ************************************
ok: [localhost] => {
"item": "",
"msg": "Debug Message"
}
PLAY RECAP ********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
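A variant of the same idea, comparing the answer case-insensitively via the Jinja2 `lower` filter (a sketch, not tested on every Ansible version; `private: no` simply echoes the typed answer to the screen):

```yaml
- name: Deploy Webserver
  hosts: webservers
  vars_prompt:
    - name: run_common
      prompt: "Run common tasks? (y/n)"
      default: "n"
      private: no
  roles:
    - { role: common, when: run_common | lower == "y" }
```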