Using a separate testing Mongo database locally and on Heroku - mongodb

In my project I am using https://www.npmjs.com/package/dotenv-safe in order to declare environment variables needed for configuration. For example:
NODE_ENV=development
JWT_SECRET=xxxxxxx
JWT_EXPIRATION_MINUTES=15
MONGO_URI=mongodb://mongodb:27017/proddb
BASE_URI=http://localhost:3000/
MONGO_URI_TESTS=mongodb://mongodb:27017/testdb
PORT=3000
Then I use those variables in a config file:
module.exports = {
  env: process.env.NODE_ENV,
  port: process.env.PORT,
  jwtSecret: process.env.JWT_SECRET,
  jwtExpirationInterval: process.env.JWT_EXPIRATION_MINUTES,
  mongo: {
    uri: process.env.NODE_ENV === 'test'
      ? process.env.MONGO_URI_TESTS
      : process.env.MONGO_URI,
  },
  logs: process.env.NODE_ENV === 'production' ? 'combined' : 'dev',
};
and in my package.json file, I've got:
"scripts": {
"start": "NODE_ENV=production node ./src/index.js",
"dev": "LOG_LEVEL=debug nodemon --inspect=0.0.0.0 ./src/index.js",
"test": "NODE_ENV=test nyc --reporter=html --reporter=text mocha --timeout 20000 --recursive src/tests"
}
The problem? Everything works fine locally, but when the tests are run on Heroku (prod), they run against the main database and not against testdb...
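One way to confirm which URI the process actually resolves at runtime is a small debugging sketch like this (the ./src/config require path is an assumption based on the scripts above):
// debug-uri.js - print the environment and the Mongo URI the config resolves
const config = require('./src/config'); // assumption: the config module shown above

console.log('NODE_ENV=' + config.env);
console.log('Mongo URI: ' + config.mongo.uri);
If this prints the production URI when the test script runs on Heroku, then NODE_ENV=test is not reaching the process there.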

Related

ERROR Configuring mongoDB using Ansible (MongoNetworkError: connect ECONNREFUSED)

I'm trying to configure a MongoDB replica set using Ansible.
I succeeded in installing MongoDB on the primary server and created the replica-set configuration file, but when I launch the playbook, I get an error of type: MongoNetworkError: connect ECONNREFUSED 3.142.150.62:28041
Does anyone have an idea how to solve this?
Attached are the playbook and the error from the Jenkins console.
Playbook:
---
- name: Play1
  hosts: hhe
  #connection: local
  become: true
  #remote_user: ec2-user
  #remote_user: root
  tasks:
    - name: Install gnupg
      package:
        name: gnupg
        state: present
    - name: Import the public key used by the package management system
      shell: wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -
    - name: Create a list file for MongoDB
      shell: echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
    - name: Reload local package database
      command: sudo apt-get update
    - name: Installation of mongodb-org
      package:
        name: mongodb-org
        state: present
        update_cache: yes
    - name: Start mongodb
      service:
        name: mongod
        state: started
        enabled: yes

- name: Play2
  hosts: hhe
  become: true
  tasks:
    - name: create directories on all the EC2 instances
      shell: mkdir -p replicaset/member

- name: Play3
  hosts: secondary1
  become: true
  tasks:
    - name: Start mongoDB with the following command on secondary1
      shell: nohup mongod --port 28042 --bind_ip localhost,ec2-18-191-39-71.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &

- name: Play4
  hosts: secondary2
  become: true
  tasks:
    - name: Start mongoDB with the following command on secondary2
      shell: nohup mongod --port 28043 --bind_ip localhost,ec2-18-221-31-81.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &

- name: Play5
  hosts: arbiter
  become: true
  tasks:
    - name: Start mongoDB with the following command on arbiter
      shell: nohup mongod --port 27018 --bind_ip localhost,ec2-13-58-35-255.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &

- name: Play6
  hosts: primary
  become: true
  tasks:
    - name: Start mongoDB with the following command on primary
      shell: nohup mongod --port 28041 --bind_ip localhost,ec2-3-142-150-62.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &
    - name: Create replicaset initialize file
      copy:
        dest: /tmp/replicaset_conf.js
        mode: "u=rw,g=r,o=rwx"
        content: |
          var cfg =
          {
            "_id" : "replica_demo",
            "version" : 1,
            "members" : [
              {
                "_id" : 0,
                "host" : "3.142.150.62:28041"
              },
              {
                "_id" : 1,
                "host" : "18.191.39.71:28042"
              },
              {
                "_id" : 2,
                "host" : "18.221.31.81:28043"
              }
            ]
          }
          rs.initiate(cfg)
    - name: Pause for a while
      pause: seconds=20
    - name: Initialize the replicaset
      shell: mongo /tmp/replicaset_conf.js
The error on the Jenkins console:
PLAY [Play6] *******************************************************************
TASK [Gathering Facts] *********************************************************
ok: [primary]
TASK [Start mongoDB with the following command on primary] *********************
changed: [primary]
TASK [Create replicaset initialize file] ***************************************
ok: [primary]
TASK [Pause for a while] *******************************************************
Pausing for 20 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [primary]
TASK [Initialize the replicaset] ***********************************************
fatal: [primary]: FAILED! => {"changed": true, "cmd": "/usr/bin/mongo 3.142.150.62:28041 /tmp/replicaset_conf.js", "delta": "0:00:00.146406", "end": "2022-08-11 09:46:07.195269", "msg": "non-zero return code", "rc": 1, "start": "2022-08-11 09:46:07.048863", "stderr": "", "stderr_lines": [], "stdout": "MongoDB shell version v5.0.10\nconnecting to: mongodb://3.142.150.62:28041/test?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 3.142.150.62:28041, connection attempt failed: SocketException: Error connecting to 3.142.150.62:28041 :: caused by :: Connection refused :\nconnect#src/mongo/shell/mongo.js:372:17\n#(connect):2:6\nexception: connect failed\nexiting with code 1", "stdout_lines": ["MongoDB shell version v5.0.10", "connecting to: mongodb://3.142.150.62:28041/test?compressors=disabled&gssapiServiceName=mongodb", "Error: couldn't connect to server 3.142.150.62:28041, connection attempt failed: SocketException: Error connecting to 3.142.150.62:28041 :: caused by :: Connection refused :", "connect#src/mongo/shell/mongo.js:372:17", "#(connect):2:6", "exception: connect failed", "exiting with code 1"]}
You start the service already with
service:
  name: mongod
  state: started
  enabled: yes
thus shell: nohup mongod ... & is pointless. You cannot start the mongod service multiple times unless you use a different port and dbPath. You should prefer to start mongod as a service, i.e. systemctl start mongod or similar, instead of nohup mongod ... &. I prefer to use the configuration file (typically /etc/mongod.conf) rather than command-line options.
The plain mongo command uses the default port 27017, i.e. it does not connect to the MongoDB instances you started in the tasks above.
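For illustration, a minimal /etc/mongod.conf mirroring the primary's command-line flags could look like this (a sketch; the dbPath value is an assumption, use an absolute path that exists on the host):
# /etc/mongod.conf - sketch of the primary's settings from the playbook
net:
  port: 28041
  bindIp: localhost,ec2-3-142-150-62.us-east-2.compute.amazonaws.com
replication:
  replSetName: replica_demo
storage:
  dbPath: /var/lib/mongodb/replicaset/member  # assumption: absolute path on the host
The Initialize task would then also have to target that port explicitly, e.g. shell: mongo --port 28041 /tmp/replicaset_conf.js.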
You should wait until the replica set is initiated. You can do it like this:
content: |
  var cfg =
  {
    "_id" : "replica_demo",
    "version" : 1,
    "members" : [
      {
        "_id" : 0,
        "host" : "3.142.150.62:28041"
      },
      {
        "_id" : 1,
        "host" : "18.191.39.71:28042"
      },
      {
        "_id" : 2,
        "host" : "18.221.31.81:28043"
      }
    ]
  }
  rs.initiate(cfg)
  while (! db.hello().isWritablePrimary ) { sleep(1000) }
You configured an ARBITER. However, an arbiter node is useful only with an even number of Replica Set members. With 3 members it does not make much sense. Anyway, you don't add the arbiter to your Replica Set, so what is the reason to define it?
Just a note: you don't have to create a temp file, you can execute the script directly, e.g. similar to this:
shell:
  cmd: mongo --eval '{{ script }}'
  executable: /bin/bash
vars:
  script: |
    var cfg =
    {
      "_id" : "replica_demo",
      ...
    }
    rs.initiate(cfg)
    while (! db.hello().isWritablePrimary ) { sleep(1000) }
    print(rs.status().ok)
register: ret
failed_when: ret.stdout_lines | last != "1"
Be aware of correct quoting.

Mongo database disconnects when using docker

I am using Docker Compose with my app and am trying to connect MongoDB to the server. When I run my app locally outside of Docker, I get this output (works as intended):
[nodemon] 2.0.15
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node index.js`
Server running
Mongoose connected to db...
Mongodb connected....
When I run the docker-compose up command and the server runs in the container, I get this output:
[nodemon] 2.0.15
docker-server | [nodemon] to restart at any time, enter `rs`
docker-server | [nodemon] watching path(s): *.*
docker-server | [nodemon] watching extensions: js,mjs,json
docker-server | [nodemon] starting `node index.js`
docker-server | Works
docker-server | Works
docker-server | Mongoose connection is disconnected...
After a while, Mongoose disconnects.
My package.json is
{
  "name": "make-me-a-sandwich",
  "version": "1.1.0",
  "description": "This is the Swagger 2.0 API for Web Architectures course group project work. ",
  "main": "index.js",
  "scripts": {
    "prestart": "npm install",
    "start": "nodemon index.js"
  },
  "keywords": [
    "swagger"
  ],
  "license": "Unlicense",
  "private": true,
  "dependencies": {
    "connect": "^3.2.0",
    "js-yaml": "^3.3.0",
    "swagger-tools": "0.10.1",
    "mongoose": "^6.1.5",
    "nodemon": "^2.0.15"
  }
}
My index.js file is
const http = require('http');
const connect = require('./models/db');
const PORT = 80;

const server = http.createServer(function (request, response) {
  const { url, method, headers } = request;
  const filePath = new URL(url, `http://${headers.host}`).pathname;
  if (filePath === '/' && method.toUpperCase() === 'GET') {
    console.log("Works")
    response.statusCode = 200;
    response.setHeader('Content-Type', 'text/plain');
    response.end('Hello, World! GET\n');
  } else {
    response.statusCode = 200;
    response.setHeader('Content-Type', 'text/plain');
    response.end('Hello, World! Teemu\n');
  }
});

server.on('error', err => {
  console.error(err);
  server.close();
});
server.on('close', () => console.log('Server closed.'));
server.listen(PORT, () => {
  console.log("Server running");
});

connect.connectDB();
models/db.js
const mongoose = require('mongoose');

function connectDB() {
  mongoose
    .connect('mongodb://mongo_db:27017', {
      useNewUrlParser: true,
    })
    .then(() => {
      console.log('Mongodb connected....');
    })
    .catch(err => console.log(err.message));

  mongoose.connection.on('connected', () => {
    console.log('Mongoose connected to db...');
  });
  mongoose.connection.on('error', err => {
    console.log(err.message);
  });
  mongoose.connection.on('disconnected', () => {
    console.log('Mongoose connection is disconnected...');
  });
}

function disconnectDB() {
  mongoose.disconnect();
}

module.exports = { connectDB, disconnectDB };
Dockerfile
FROM node:17.3.0
WORKDIR /server
COPY package.json .
RUN npm install
COPY . .
EXPOSE 80
CMD ["npm", "start"]
docker-compose file
version: "3"
services:
server-a:
container_name: docker-server
build:
dockerfile: Dockerfile
context: ./backend/server-a
ports:
- "3000:80"
links:
- mongo_db
networks:
- backend
mongo_db:
container_name: mongo
image: mongo:latest
ports:
- '27017:27017'
networks:
backend:
Help would be really appreciated. Let me know if I can offer any other information.
Different containers need to be on the same Compose network to communicate. If a service doesn't have a networks: block, Compose automatically attaches it to a default network. So in your example, the server-a container is only on the backend network, but the mongo_db container is only on the default network, and that's why they can't communicate.
The easiest way to resolve this is to delete all of the networks: blocks in the file. Then Compose will attach all of the containers to the default network. Removing other unnecessary options, you could reduce this Compose file to just
version: "3.8"
services:
server-a:
build: ./backend/server-a
ports:
- "3000:80"
mongo_db:
image: mongo:latest
ports:
- '27017:27017'
In a comment you suggest that it's important to keep a second named network. If that's the case, then you need to make sure the database container also has a networks: block that names a network in common with the application container.
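If you do keep the named network, a sketch of that variant (reusing the backend name from the question) would be:
version: "3.8"
services:
  server-a:
    build: ./backend/server-a
    ports:
      - "3000:80"
    networks:
      - backend
  mongo_db:
    image: mongo:latest
    networks:
      - backend
networks:
  backend:
The key point is that both services now name the same network.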

How can cypress be made to work with aurelia with github actions and locally?

OK, so I added Cypress to Aurelia during my configuration and it worked fine. When I went to set up Cypress on GitHub as just a command, I could not get it to recognize Puppeteer as a browser. So instead I used the official GitHub Action for Cypress, and that works:
- name: test
  uses: cypress-io/github-action@v1
  with:
    start: yarn start
    browser: ${{matrix.browser}}
    record: true
  env:
    # pass the Dashboard record key as an environment variable
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
However, I had to set my cypress.json as follows:
{
  "baseUrl": "http://localhost:8080",
  "fixturesFolder": "test/e2e/fixtures",
  "integrationFolder": "test/e2e/integration",
  "pluginsFile": "test/e2e/plugins/index.js",
  "screenshotsFolder": "test/e2e/screenshots",
  "supportFile": "test/e2e/support/index.js",
  "videosFolder": "test/e2e/videos",
  "projectId": "..."
}
And now running yarn e2e doesn't work, because there's no server stood up; it's no longer being started via cypress.config.js:
const CLIOptions = require('aurelia-cli').CLIOptions;
const aureliaConfig = require('./aurelia_project/aurelia.json');

const PORT = CLIOptions.getFlagValue('port') || aureliaConfig.platform.port;
const HOST = CLIOptions.getFlagValue('host') || aureliaConfig.platform.host;

module.exports = {
  config: {
    baseUrl: `http://${HOST}:${PORT}`,
    fixturesFolder: 'test/e2e/fixtures',
    integrationFolder: 'test/e2e/integration',
    pluginsFile: 'test/e2e/plugins/index.js',
    screenshotsFolder: 'test/e2e/screenshots',
    supportFile: 'test/e2e/support/index.js',
    videosFolder: 'test/e2e/videos'
  }
};
How can I make it so that yarn e2e works as it previously did, and have it working on GitHub? (I don't care which side of the equation is changed.)
Here's yarn e2e; I'm not sure what au is doing under the hood:
"e2e": "au cypress",
The easiest way to achieve this: create a test/e2e/cypress-config.json
{
  "baseUrl": "http://localhost:8080",
  "fixturesFolder": "test/e2e/fixtures",
  "integrationFolder": "test/e2e/integration",
  "pluginsFile": "test/e2e/plugins/index.js",
  "screenshotsFolder": "test/e2e/screenshots",
  "supportFile": "test/e2e/support/index.js",
  "videosFolder": "test/e2e/videos",
  "projectId": "1234"
}
and then set up the GitHub Action like this:
- name: test
  uses: cypress-io/github-action@v1
  with:
    config-file: test/e2e/cypress-config.json
    start: yarn start
    browser: ${{matrix.browser}}
    record: true
  env:
    # pass the Dashboard record key as an environment variable
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
The path doesn't matter, just that you configure the same one everywhere. Just make sure it doesn't overlap with what Aurelia wants.
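To keep yarn e2e working locally against the same file, one option (a sketch, not from the original answer; it bypasses au entirely and assumes the cypress binary is installed) is a script that passes the config file to the Cypress CLI:
"scripts": {
  "e2e": "cypress run --config-file test/e2e/cypress-config.json"
}
You then have to start the dev server yourself first (e.g. yarn start), just as the GitHub Action's start: option does.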

How to find my database host address for CI / CD?

I'm going to set up CI/CD in GitLab for a site deployed on AWS. My website uses a PostgreSQL database.
Question 1: I don't know how to find the DB host. Some say the host name is localhost, but I doubt it, because I've deployed my website to AWS. Should it be the Elastic IP?
My .gitlab-ci.yml file is as follows:
image: node:latest
stages:
  - testing
variables:
  POSTGRES_DB: firstdb
  POSTGRES_USER: johndoe
  POSTGRES_PASSWORD: 1234
  POSTGRES_HOST: //I don't know
testing:
  services:
    - postgres:latest
  before_script:
    - npm install -g yarn
    - yarn install
    - yarn knex migrate:latest --env testing
  stage: testing
  script:
    - yarn jest
Question 2: Also, should I change the database config for development, testing, and production in knex.ts accordingly, so that it aligns with .gitlab-ci.yml?
My knex file is as follows:
import * as dotenv from 'dotenv';
dotenv.config();

module.exports = {
  development: {
    client: 'postgresql',
    connection: {
      database: process.env.DB_NAME, // should I type actual data?
      user: process.env.DB_USERNAME,
      password: process.env.DB_PASSWORD
    },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations'
    }
  },
  testing: {
    client: 'postgresql',
    connection: {
      host: process.env.POSTGRES_HOST, // should I type actual data?
      database: process.env.POSTGRES_DB,
      user: process.env.POSTGRES_USER,
      password: process.env.POSTGRES_PASSWORD
    },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations'
    }
  },
  production: {
    client: 'postgresql',
    connection: {
      database: process.env.DB_NAME, // should I type actual data?
      user: process.env.DB_USERNAME,
      password: process.env.DB_PASSWORD
    },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations'
    }
  }
};
Many thanks in advance. :)
The hostname is postgres; it is derived from the image name. This is explained here:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#how-services-are-linked-to-the-job
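Applied to the .gitlab-ci.yml above, that would look like this (a sketch; only the host value changes):
variables:
  POSTGRES_DB: firstdb
  POSTGRES_USER: johndoe
  POSTGRES_PASSWORD: 1234
  POSTGRES_HOST: postgres
The testing block of the knex file already reads process.env.POSTGRES_HOST, so no change is needed there.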

MongoError: failed to connect to server [mongo:27017] on first connect

Docker config:
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
Docker compose:
version: "2"
services:
web:
build: .
depends_on:
- mongo
mongo:
image: mongo
ports:
- "27017:27017"
package.json:
{
  "name": "docker_web_app",
  "version": "1.0.0",
  "description": "",
  "author": "Stepan Yakovenko <stiv.yakovenko@gmail.com>",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.1",
    "mongodb": "~3.0.1",
    "monk": "~6.0.5"
  }
}
server.js:
var mongo = require('mongodb');
var monk = require('monk');
var db = monk('mongo:27017/nodetest1');
db.then(function () { console.log("hello"); });
Most of the time it works, but if I purge the Docker cache, it usually doesn't work and gives me this:
web_1 | (node:15) UnhandledPromiseRejectionWarning: MongoError: failed to connect to server [mongo:27017] on first connect [MongoErr
or: connect ECONNREFUSED 172.18.0.2:27017]
The root cause, I think, is that Docker's depends_on doesn't guarantee that mongo has started listening; in this case I get the listening message from mongo after this error. How can I fix this? Does Docker have any fix for this situation? Or how can I ask mongo to try connecting forever?
Thanx
This is sample reconnect code, which worked for me:
var connect = function () {
  var db = monk('mongo:27017/nodetest1');
  db.then(function () {
    console.log("connected");
  }).catch(function () {
    // sometimes node starts before mongo, so we have to reconnect in case of error
    connect();
  });
};
connect();
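One refinement worth considering (my assumption, not part of the original answer): back off between attempts instead of retrying immediately, so a database that stays down doesn't cause a tight reconnect loop:
var connect = function () {
  var db = monk('mongo:27017/nodetest1');
  db.then(function () {
    console.log("connected");
  }).catch(function () {
    // assumption: wait one second between attempts rather than recursing immediately
    setTimeout(connect, 1000);
  });
};
connect();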