Scala-Redis: How to connect to a global Redis cache in Scala?

I am using scala-redis to connect to a global Redis database. I can connect easily to a locally running Redis server, but I'm not sure what to pass in the constructor to connect to the global database. I tried passing the username, password, and port number, but it didn't work.
I am using the memcached protocol.
What do I mean by a global Redis cache?
I have configured a cache on https://app.redislabs.com/
I am using these credentials:

Could you please share some details?
What is meant by "global Redis cache"? Are you trying to access it in AWS?
If it's in AWS, then you have to create a script (CloudFormation - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_ElastiCache.html) that creates the connection with AWS ElastiCache for Redis. Then provide the following config in application-prod.conf.
// NOTE: ELASTICACHE_ENDPOINT is your AWS Redis endpoint
play.cache.redis {
  bind-default: false
  default-cache: "redis"
  source: standalone
  dispatcher: contexts.blockingCacheDispatcher

  instances {
    cache-name-1 {
      connection-string: ${?ELASTICACHE_ENDPOINT}
      source: connection-string
      database: 1
    },
    cache-name-2 {
      connection-string: ${?ELASTICACHE_ENDPOINT}
      source: connection-string
      database: 1
    }
  }
}
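If the cache is instead hosted on Redis Labs (as in the question) rather than AWS, scala-redis can typically authenticate with just the endpoint host, port, and database password. Here's a minimal sketch, assuming the debasishg/scala-redis client; the endpoint and password below are placeholders for the values shown on your database page at app.redislabs.com:
import com.redis.RedisClient

// Placeholder endpoint/port/password - substitute the values from
// your Redis Labs database configuration page.
val client = new RedisClient(
  host = "redis-12345.c1.us-east-1-1.ec2.cloud.redislabs.com",
  port = 12345,
  secret = Some("your-database-password") // sent via AUTH on connect
)

client.set("greeting", "hello")
println(client.get("greeting")) // Some(hello)
One caveat: scala-redis speaks the Redis protocol (RESP), so this only works if the Redis Labs database was created with the Redis protocol; a database created with the memcached protocol (as mentioned in the question) won't accept a Redis client.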

Related

Prisma 1 + MongoDB Atlas deploy to Heroku returns error 404

I've deployed a Prisma 1 GraphQL server app on Heroku, connected to a MongoDB Atlas cluster.
Running prisma deploy locally against the default endpoint http://localhost:4466, the action runs successfully and all the schemas are generated correctly.
But if I change the endpoint to the Heroku remote host https://<myapp>.herokuapp.com, prisma deploy fails, returning this exception:
ERROR: GraphQL Error (Code: 404)
{
"error": "\n<html lang="en">\n\n<meta charset="utf-8">\nError\n\n\nCannot POST /management\n\n\n",
"status": 404
}
I think it could be related to an authentication problem, but I'm confused because I've defined both the service secret in prisma.yml and the management API secret key in docker-compose.yml.
Here are my current configs, in case they're helpful:
prisma.yml
# The HTTP endpoint for your Prisma API
# Also tried with https://<myapp>.herokuapp.com alone, with the same result
endpoint: https://<myapp>.herokuapp.com/dinai/staging
secret: ${env:PRISMA_SERVICE_SECRET}

# Points to the file that contains your datamodel
datamodel: datamodel.prisma
databaseType: document

# Specifies language & location for the generated Prisma client
generate:
  - generator: javascript-client
    output: ../src/generated/prisma-client

# Ensures Prisma client is re-generated after a datamodel change.
hooks:
  post-deploy:
    - prisma generate
docker-compose.yml
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        managementApiSecret: ${PRISMA_MANAGEMENT_API_SECRET}
        databases:
          default:
            connector: mongo
            uri: mongodb+srv://${MONGO_DB_USER}:${MONGO_DB_PASSWORD}@${MONGO_DB_CLUSTER}/myapp?retryWrites=true&w=majority
            database: myapp
Plus, a weird situation happens in both cases: if I try to navigate the resulting API with GraphQL Playground, clicking on the "Schema" tab returns an error, while the "Docs" tab is populated correctly. Apparently the exception is blocking the script from generating the rest of the schemas.
A little help by someone experienced with Prisma/Heroku would be awesome.
Thanks in advance.
To date, I'm still not clear on what exactly was causing the exception. But digging deeper into the Prisma docs, I discovered that in version 1 the app needs to be proxied through the Prisma Cloud.
So deploying straight to Heroku without it was probably the main issue: basically, there wasn't any Prisma container service running on the server.
What I did was follow, step by step, the official doc on how to deploy your server to Prisma Cloud (here's the video version). As in the example shown in the guide, I already had my own project, which is actually split into two different apps: one for the client (front-end) and one for the API (back-end). So, instead of generating a new one, I pointed the back-end API endpoint to the remote URL of the Prisma server generated by the cloud (the Heroku container created by following the tutorial). Then, leaving the management API secret key only in the Prisma server container configuration (which was generated automatically by the cloud) and the service secret only in the back-end app, I was finally able to run prisma deploy correctly and run my project remotely.

How to connect Vertx RedisClient in cluster mode with Elasticache

I am using the Vert.x Redis client (io.vertx.rxjava.redis.RedisClient) to connect to ElastiCache Redis.
It does connect, but shows an error:
io.vertx.redis.client.impl.types.ErrorType: MOVED 4985 xxx.xxx.xxx.xxx:63791
After reading about the error, I found it's because the data is sharded across several nodes and the client isn't connecting to all of them.
From the library, I am not able to figure out which method to use to connect in cluster mode.
Here is an example of how to connect and send a SET command in cluster mode.
Define options:
import io.vertx.redis.client.Redis;
import io.vertx.redis.client.RedisClientType;
import io.vertx.redis.client.RedisOptions;
import io.vertx.redis.client.RedisSlaves;
import static io.vertx.redis.client.Command.SET;
import static io.vertx.redis.client.Request.cmd;

final RedisOptions options = new RedisOptions()
  .setType(RedisClientType.CLUSTER)
  .setUseSlave(RedisSlaves.SHARE)
  .setMaxWaitingHandlers(128 * 1024)
  .addEndpoint("redis://127.0.0.1:7000")
  .addEndpoint("redis://127.0.0.1:7001")
  .addEndpoint("redis://127.0.0.1:7002")
  .addEndpoint("redis://127.0.0.1:7003")
  .addEndpoint("redis://127.0.0.1:7004")
  .addEndpoint("redis://127.0.0.1:7005");
Connect and send command:
Redis.createClient(vertx, options).connect(onCreate -> {
  final Redis cluster = onCreate.result();
  // SET requires both a key and a value.
  cluster.send(cmd(SET).arg("key").arg("value"), set -> {
    System.out.println(set.result());
  });
});
Tip: if you are unsure how to use a library, or its documentation is not clear enough, you can always check out the project's tests, if it has them. You can see how things are implemented there and borrow examples.

Retrieve auto scaling group instance IPs and provide them to Ansible

I'm currently developing a Terraform script and Ansible roles in order to install MongoDB with replication. I'm using an auto scaling group and I need to pass the EC2 instances' private IPs to Ansible as extra vars. Is there any way to do that?
When it comes to rs.initiate(), is there any way to add the EC2 private IPs to the Mongo cluster while Terraform is creating the instances?
I'm not really sure how it's done with ASGs; probably a combination of user data and EC2 metadata would be helpful.
But here's how I do it when there's a fixed number of nodes. Posting this answer as it may be helpful to someone.
Use the EC2 dynamic inventory scripts.
Ref - https://docs.ansible.com/ansible/2.5/user_guide/intro_dynamic_inventory.html
This is basically a Python script, i.e. ec2.py, which gets the instance private IPs using tags etc. It comes with a config file named ec2.ini.
Tag your instance in the TF script (add a role tag) -
resource "aws_instance" "ec2" {
....
tags = "${merge(var.tags, map(
"description","mongodb-node",
"role", "mongodb-node",
"Environment", "${local.env}",))}"
}
output "ip" {
value = ["${aws_instance.ec2.private_ip}"]
}
Get the instance private IP in playbook -
- hosts: localhost
  connection: local
  tasks:
    - debug: msg="MongoDB Node IP is - {{ hostvars[groups['tag_role_mongodb-node'][0]].inventory_hostname }}"
Now run the playbook using TF null_resource -
resource "null_resource" "ansible_run" {
  triggers {
    ansible_file = "${sha1(file("${path.module}/${var.ansible_play}"))}"
  }

  provisioner "local-exec" {
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i ./ec2.py --private-key ${var.private_key} ${var.ansible_play}"
  }
}
You've got to make sure the AWS-related environment variables are present/exported for Ansible to fetch the EC2 metadata. Also make sure ec2.py is executable.
If you want to get the private IP, change the following config in ec2.ini -
destination_variable = private_ip_address
vpc_destination_variable = private_ip_address

How to Connect EntityFramework Core to Multiple Google CloudSQL Instances Using CloudSQL Proxy?

I have 2 Postgres databases, each in their own CloudSQL instance, and a .NET web app running in GKE.
Goal: connect the web app, which uses EntityFramework Core, to both CloudSQL instances using a single CloudSQL proxy.
I followed this setup and modified it following this S.O. answer.
There is an EF Core DbContext for each CloudSQL Instance.
The context connections are set using 2 environment variables:
using System;
using Microsoft.EntityFrameworkCore;

public class Context1 : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) =>
        optionsBuilder.UseNpgsql(Environment.GetEnvironmentVariable("CONNECTION_1"));
}

public class Context2 : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) =>
        optionsBuilder.UseNpgsql(Environment.GetEnvironmentVariable("CONNECTION_2"));
}
The environment variables are set as:
CONNECTION_1 = "Host=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password"
CONNECTION_2 = "Host=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password2"
Current Behavior:
Context1 interacts with CloudSQL instance1 as normal.
Context2 throws PostgresException "42P01: relation {my_Table_Name} does not exist." when trying to access a table.
Note: "my_Table_Name" is a table in CloudSQL instance2
This leads me to believe Context2 is trying to access CloudSQL instance1 instead of instance2.
How can I point Context2 through the SQL Proxy to CloudSQL instance2?
Basically this:
CONNECTION_1 = "Host=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password"
CONNECTION_2 = "Host=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password2"
means that you are connecting to the exact same Cloud SQL instance, just with two different passwords (and the same username). It's not clear how CONNECTION_2 even manages to connect to Cloud SQL instance1, though.
You would want to have something like this:
CONNECTION_1 = "Host=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password"
CONNECTION_2 = "Host=127.0.0.1;Port=5433;Database=postgres;Username=postgres;Password=password2"
and on your cloud-proxy command line (running on the same pod):
-instances=project:region:sqlinstance1=tcp:5432,project:region:sqlinstance2=tcp:5433

Hosting the database separately for Meteor apps

It seems to be a common and safer practice to host the database separately from Meteor apps. That is to say, have an EC2 instance for your Meteor app, and an EC2 instance for your MongoDB, and make them talk to one another.
From what I understand, people do this because it's more secure, and it allows them to deploy newer versions of their app without touching the database.
I'd like to do this with Amazon EC2 alone, as opposed to using another 3rd party service, like Compose.io.
How can I host a Meteor app and its database separately on two EC2 instances, and have them communicate with one another?
It is common practice, and people mostly do it because it offers you the ability to scale them both independently.
As to the how: you'll obviously want to configure each of your Amazon EC2 instances, installing Meteor on one and MongoDB on the other. You'll also need to configure your VPC (Amazon Virtual Private Cloud) so that your MongoDB instance accepts incoming connections on whatever port you specify (the default is 27017), so that your Meteor application can connect.
After that it's just a matter of telling your Meteor app where to go to get the database connection. The most secure way of doing this is to set a few environment variables, named MONGODBSERVER, MONGODBPORT, DBUSER, DBPASSWORD, etc.
You'll then want to read those variables in your server-side Meteor code, using something like:
Meteor.startup(function() {
  var DbUser = process.env.DBUSER;
  var DbPassword = process.env.DBPASSWORD;
  var MongoDBServer = process.env.MONGODBSERVER;
  var MongoDBPort = process.env.MONGODBPORT;
});
And if you're using the native MongoDB Driver, connecting becomes trivial:
var MongoClient = require('mongodb').MongoClient;
// Build the connection string from the variables read at startup.
var url = 'mongodb://' + DbUser + ':' + DbPassword + '@' +
          MongoDBServer + ':' + MongoDBPort + '/databasename';
MongoClient.connect(url, function(err, db) {
  ...
});
Then it's just a matter of constructing your Mongo models using something like:
Temperatures = new Mongo.Collection('temperatures');
Temperatures._ensureIndex({temp: 1, time: 1});
And then taking action on those models in regard to the database:
Temperatures.insert({temp: ftemp, time: Math.floor(Date.now() / 1000)});
I'll also mention that http://modulus.io is a really decent Meteor hosting solution. I'd recommend them, unless you are stuck on using Amazon EC2 instances, which is fine, but more complicated for a simple application.
You need to set an environment variable telling Meteor where Mongo is hosted:
MONGO_URL
mongodb://<user>:<password>@hostingproviderurl:port/xxx?autoReconnect=true&connectTimeoutMS=60000
The correct mongodb:// URL string will be provided by your MongoDB hosting provider.