Connect to MongoDB (hosted on a VM) from a local machine using Spring Boot

I have set up MongoDB on a VM. Now I need to connect to that Mongo instance from my local machine using Spring Boot. What do I change in the application.properties file, given that I have the VM's username, password, and IP address?
Also, how do I set up Mongo on the VM for the InitDatabase class below?
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.stereotype.Component;

@Component
public class InitDatabase {

    @Bean
    CommandLineRunner init(MongoOperations operations) {
        return args -> {
            operations.dropCollection(Image.class);
            operations.insert(new Image("1",
                    "learning-spring-boot-cover.jpg"));
            operations.insert(new Image("2",
                    "learning-spring-boot-2nd-edition-cover.jpg"));
            operations.insert(new Image("3",
                    "bazinga.png"));
            operations.findAll(Image.class).forEach(image -> {
                System.out.println(image.toString());
            });
        };
    }
}

You should override the spring.data.mongodb configuration properties:

# MongoDB config
spring.data.mongodb.authentication-database=*authentication_database_to_use*
spring.data.mongodb.username=*database_username*
spring.data.mongodb.password=*database_password*
spring.data.mongodb.database=*database_to_connect_to*
spring.data.mongodb.port=*port_of_the_running_mongo_instance*
spring.data.mongodb.host=*host_of_the_running_mongo_instance -- here, your own VM's IP address*
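For example, with placeholder values filled in (every value below is an assumption; substitute your own):

spring.data.mongodb.authentication-database=admin
spring.data.mongodb.username=appuser
spring.data.mongodb.password=secret
spring.data.mongodb.database=imagedb
spring.data.mongodb.port=27017
spring.data.mongodb.host=203.0.113.25

As for setting up Mongo on the VM for the InitDatabase class, two things usually have to happen before a remote connection works (a sketch; adapt it to your distribution and security requirements). First, make mongod listen on more than localhost by editing /etc/mongod.conf:

net:
  port: 27017
  bindIp: 0.0.0.0   # by default mongod only listens on 127.0.0.1

Second, create the user that the properties refer to, for example from the mongo shell on the VM:

use admin
db.createUser({
  user: "appuser",
  pwd: "secret",
  roles: [{ role: "readWrite", db: "imagedb" }]
})

Restart mongod afterwards and open port 27017 in the VM's firewall. The Image collection itself does not need to exist up front; it is created on the first insert.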

Related

Flask REST API SQLAlchemy connection to Cloud SQL Postgres

I have a connection problem with Cloud SQL Postgres from my Flask REST API app.
I have a db.py file:
import os

from flask_sqlalchemy import SQLAlchemy
import sqlalchemy

db = SQLAlchemy()


def connect_unix_socket() -> sqlalchemy.engine.base.Engine:
    """Initializes a Unix socket connection pool for a Cloud SQL instance of Postgres."""
    # Note: Saving credentials in environment variables is convenient, but not
    # secure - consider a more secure solution such as
    # Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
    # keep secrets safe.
    db_user = os.environ["DB_USER"]  # e.g. 'my-database-user'
    db_pass = os.environ["DB_PASS"]  # e.g. 'my-database-password'
    db_name = os.environ["DB_NAME"]  # e.g. 'my-database'
    unix_socket_path = os.environ["INSTANCE_UNIX_SOCKET"]  # e.g. '/cloudsql/project:region:instance'

    pool = sqlalchemy.create_engine(
        # Equivalent URL:
        # postgresql+pg8000://<db_user>:<db_pass>@/<db_name>
        #                     ?unix_sock=<INSTANCE_UNIX_SOCKET>/.s.PGSQL.5432
        # Note: Some drivers require the `unix_sock` query parameter to use a different key.
        # For example, 'psycopg2' uses the path set to `host` in order to connect successfully.
        sqlalchemy.engine.url.URL.create(
            drivername="postgresql+pg8000",
            username=db_user,
            password=db_pass,
            database=db_name,
            query={"unix_sock": "{}/.s.PGSQL.5432".format(unix_socket_path)},
        ),
        # Pool size is the maximum number of permanent connections to keep.
        pool_size=5,
        # Temporarily exceeds the set pool_size if no connections are available.
        max_overflow=2,
        # The total number of concurrent connections for your application will be
        # a total of pool_size and max_overflow.
        # 'pool_timeout' is the maximum number of seconds to wait when retrieving a
        # new connection from the pool. After the specified amount of time, an
        # exception will be thrown.
        pool_timeout=30,  # 30 seconds
        # 'pool_recycle' is the maximum number of seconds a connection can persist.
        # Connections that live longer than the specified amount of time will be
        # re-established.
        pool_recycle=1800,  # 30 minutes
    )
    return pool
I import the db.py file in my app.py file:
import os

import sqlalchemy
from flask import Flask
from flask_smorest import Api
from flask_sqlalchemy import SQLAlchemy

from db import db, connect_unix_socket
import models
from resources.user import blp as UserBlueprint

# pylint: disable=C0103
app = Flask(__name__)


def init_connection_pool() -> sqlalchemy.engine.base.Engine:
    # use a Unix socket when INSTANCE_UNIX_SOCKET (e.g. /cloudsql/project:region:instance) is defined
    unix_socket_path = os.environ.get("INSTANCE_UNIX_SOCKET")
    if unix_socket_path:
        return connect_unix_socket()
    raise ValueError(
        "Missing database connection type. Please define one of INSTANCE_HOST, "
        "INSTANCE_UNIX_SOCKET, or INSTANCE_CONNECTION_NAME"
    )


db = None


@app.before_first_request
def init_db() -> sqlalchemy.engine.base.Engine:
    global db
    db = init_connection_pool()


api = Api(app)


@app.route("/api")
def user_route():
    return "Welcome user API!"


api.register_blueprint(UserBlueprint)

if __name__ == '__main__':
    server_port = os.environ.get('PORT', '8080')
    app.run(debug=True, port=server_port, host='0.0.0.0')
The app starts correctly, but when I call the endpoint to GET or POST users, it crashes with this error:
RuntimeError: The current Flask app is not registered with this 'SQLAlchemy' instance. Did you forget to call 'init_app', or did you create multiple 'SQLAlchemy' instances?
This is my User.py class:
from flask.views import MethodView
from flask_smorest import Blueprint
from sqlalchemy.exc import SQLAlchemyError, IntegrityError

from db import db
from models import UserModel
from schemas import UserSchema

blp = Blueprint("Users", "users", description="Operations on users")


@blp.route("/user/<string:user_id>")
class User(MethodView):
    @blp.response(200, UserSchema)
    def get(self, user_id):
        user = UserModel.query.get_or_404(user_id)
        return user

    def delete(self, user_id):
        user = UserModel.query.get_or_404(user_id)
        db.session.delete(user)
        db.session.commit()
        return {"message": "User deleted"}, 200


@blp.route("/user")
class UserList(MethodView):
    @blp.response(200, UserSchema(many=True))
    def get(self):
        return UserModel.query.all()
How can I fix this issue?
@dev_ Your issue is that you are trying to intermingle SQLAlchemy Core with the SQLAlchemy ORM as if they were the same thing. Connection pools created with sqlalchemy.create_engine use the Core API, while Flask-SQLAlchemy uses the SQLAlchemy ORM; that mismatch is the root cause of your issue, and it is easier to use one or the other.
I would recommend using Flask-SQLAlchemy exclusively, together with the cloud-sql-python-connector library, for your use case. It will make your life much easier.
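The connector lives on PyPI; installing it with the pg8000 extra (as documented in its README) pulls in the driver used below:

pip install "cloud-sql-python-connector[pg8000]"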
For simplicity, I am getting rid of your db.py, which leaves your app.py file as follows:
import os

from flask import Flask
from flask_smorest import Api
from flask_sqlalchemy import SQLAlchemy
from google.cloud.sql.connector import Connector, IPTypes

from resources.user import blp as UserBlueprint

# load env vars
db_user = os.environ["DB_USER"]  # e.g. 'my-database-user'
db_pass = os.environ["DB_PASS"]  # e.g. 'my-database-password'
db_name = os.environ["DB_NAME"]  # e.g. 'my-database'
instance_connection_name = os.environ["INSTANCE_CONNECTION_NAME"]  # e.g. 'project:region:instance'


# Python Connector database connection function
def getconn():
    with Connector() as connector:
        conn = connector.connect(
            instance_connection_name,  # Cloud SQL instance connection name
            "pg8000",
            user=db_user,
            password=db_pass,
            db=db_name,
            ip_type=IPTypes.PUBLIC  # IPTypes.PRIVATE for private IP
        )
        return conn


app = Flask(__name__)

# configure Flask-SQLAlchemy to use the Python Connector
app.config['SQLALCHEMY_DATABASE_URI'] = "postgresql+pg8000://"
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
    "creator": getconn
}

# initialize db (using app!)
db = SQLAlchemy(app)

# rest of your code
api = Api(app)
# ...
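One caveat about this layout (an assumption based on your imports, since resources/user.py does from db import db): every module must end up using the same SQLAlchemy instance that is bound to the app, or the original RuntimeError comes back. A minimal sketch that keeps your db.py for exactly that purpose:

# db.py - nothing but the shared SQLAlchemy instance lives here
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

and then in app.py, instead of creating a second instance with db = SQLAlchemy(app):

# app.py (excerpt) - bind the shared instance to the app
from db import db

db.init_app(app)  # registers this app with the 'db' your models and resources already import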
Hope this helps resolve your issue!

Control Chalice IP connections to Postgres

I built a small Chalice app that is connected to Postgres and does some inserts. In the pg_hba.conf file (the database is on another server) I have allowed only certain IPs to connect, but almost every request from Lambda comes from a different IP.
This is my Chalice app:
import psycopg2.extras
from psycopg2.extras import execute_values
from chalice import Chalice, Response

app = Chalice(app_name='hello_world')
app.debug = True

conn = psycopg2.connect(user='user',
                        password='Password123',
                        host='123.12.12.123',
                        port=5432,
                        database='test_db')
cursor = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)


@app.route("/")
def main_page():
    cursor.execute("SELECT COUNT(*) FROM main WHERE status=1")
    g = dict(cursor.fetchone())
    return {"count": g['count']}
It works when I deploy locally on 127.0.0.1. Is there a way to manage the Lambda IP when connecting to the database? I am open to any suggestions.
Create your VPC, private subnets, public subnets, security groups, etc., and route the private subnets' outbound traffic through a NAT gateway.
Note: this is the challenging part. Tutorial: https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
Then copy the security group IDs and subnet IDs into the Chalice config, .chalice/config.json (a pg_hba.conf sketch follows the config below):
{
  "version": "2.0",
  "app_name": "XYZ",
  "stages": {
    "prod": {
      "security_group_ids": [
        "sg-YYYYYYYY"
      ],
      "subnet_ids": [
        "subnet-XXXXXXXX"
      ]
    }
  }
}
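Once the function runs inside the VPC, all of its outbound traffic leaves through the NAT gateway's Elastic IP, so there is a single stable address to authorize. A sketch of the pg_hba.conf entry, where the address, database, and user are placeholders:

# pg_hba.conf - authorize only the NAT gateway's Elastic IP
host    test_db    user    203.0.113.10/32    md5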

Unable to connect to Postgres using Deno

I am unable to connect to Postgres from Deno.
Here is the configuration:
const dbCreds = {
  applicationName: "appname",
  user: "user_sfhjwre",
  database: "d9iu8mve7nen",
  password: "68790f31eelkhlashdlkagsvADSDa52f9d8faed894c037ef6f9c9f09885603",
  hostname: "ec2-345-34-97-212.eu-east-1.xx.amazonaws.com",
  port: 5432,
};

export { dbCreds };
Usage:
import { Client } from "https://deno.land/x/postgres/mod.ts";
import { dbCreds } from "../config.ts";
const client = new Client(dbCreds);
await client.connect();
Also tried:
config = "postgres://user#localhost:5432/test?application_name=my_custom_app";
const client = new Client(config);
await client.connect();
Same result:
Uncaught Error: Unknown auth message code 1397113172
Is there anything wrong with the syntax? I can connect to the same database using Prisma.
I have my PostgreSQL server on a remote host, and each time my public IP changes I have to update pg_hba.conf to authorize the new IP for remote access, so check whether the address you are connecting from is actually allowed there.
Hope this helps.
Best regards.

NestJS: Resolving dependencies in NestMiddleware?

I'm trying to combine express-session with TypeORM storage within the NestJS framework, so I wrote a NestMiddleware as a wrapper for express-session (see below). When I start Node, NestJS logs the following error:
UnhandledPromiseRejectionWarning: Unhandled promise rejection
(rejection id: 1): Error: Nest can't resolve dependencies of the
SessionMiddleware (?). Please verify whether [0] argument is available
in the current context.
Express does not start (connection refused), but the SQLite DB (where the sessions should be stored) is created, along with a session table, though not its columns.
To me it looks like there is a specific problem resolving dependencies injected with @InjectRepository from the @nestjs/typeorm module. Does anyone have a hint?
Code:
import { Middleware, NestMiddleware, ExpressMiddleware } from '@nestjs/common';
import * as expressSession from 'express-session';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import { TypeormStore } from 'connect-typeorm';
import { Session } from './session.entity';

@Middleware()
export class SessionMiddleware implements NestMiddleware {
  constructor(
    @InjectRepository(Session)
    private readonly sessionRepository: Repository<Session>
  ) {}

  resolve(): ExpressMiddleware {
    return expressSession({
      store: new TypeormStore({ ttl: 86400 }).connect(this.sessionRepository),
      secret: 'secret'
    });
  }
}
It was my fault. I had the middleware in a feature module but was configuring the session middleware at the app-module level. At that level, the following import was missing:
TypeOrmModule.forFeature([Session])
I have now moved everything, including the middleware configuration, into the non-app module. That solved the problem.

Scala spray failing to bind to EC2 Public DNS

I just stopped running a Scala Spray executable on an EC2 Ubuntu instance to launch a newer version of the app. When I try to run the new executable I get the following error:

ubuntu@ip-172-32-92:~/suredbits-dfs$ ./target/universal/stage/bin/suredbits-dfs
[WARN] [08/14/2015 03:22:30.314] [NflDbApiActorSystemConfig-akka.actor.default-dispatcher-5] [akka://NflDbApiActorSystemConfig/user/IO-HTTP/listener-0] Bind to ec2-52-116-195.us-west-2.compute.amazonaws.com/172.32.92:80 failed

I have checked that port 80 is open and available by running:

netstat -anp | grep 80

which doesn't return anything. So the port seems free, yet Spray still fails to bind. Here is how I attempt to start the server in my executable:
package com.suredbits.dfs

/**
 * Created by chris on 8/9/15.
 */
import akka.actor.ActorSystem
import com.github.nfldb.config.{NflDbApiActorSystemConfig, NflDbApiDbConfig}
import com.suredbits.dfs.nfl.scoring.NflPlayerScoringService
import spray.routing.SimpleRoutingApp

object Main extends App with SimpleRoutingApp with NflPlayerScoringService
    with NflDbApiDbConfig with NflDbApiActorSystemConfig {

  import actorSystem._

  /*
  startServer(interface = "localhost", port = 80) {
    path("hello") {
      get {
        complete {
          <h1>Say hello to spray</h1>
        }
      }
    } ~ nflPlayerScoringServiceRoutes
  }
  */

  startServer(interface = "ec2-52-116-195.us-west-2.compute.amazonaws.com", port = 80) {
    path("hello") {
      get {
        complete {
          <h1>Say hello to spray</h1>
        }
      }
    } ~ nflPlayerScoringServiceRoutes
  }
}
Start the process with sudo:

sudo ./target/universal/stage/bin/suredbits-dfs

Why? You are trying to bind a privileged port (a port under 1024), and only root can do that. The clue that tipped me off was that you checked netstat and nothing else was on port 80.