I added database.runMigration: true to my build.gradle file but I'm getting this error when running deployNodes. What's causing this?
[ERROR] 14:05:21+0200 [main] subcommands.ValidateConfigurationCli.logConfigurationErrors$node - Error(s) while parsing node configuration:
- for path: "database.runMigration": Unknown property 'runMigration'
Here's the deployNodes task from my build.gradle:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    directory "./build/nodes"
    ext.drivers = ['.jdbc_driver']
    ext.extraConfig = [
            'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
            'dataSourceProperties.dataSource.user'     : "corda",
            'dataSourceProperties.dataSource.password' : "corda1234",
            'database.transactionIsolationLevel'       : 'READ_COMMITTED',
            'database.runMigration'                    : "true"
    ]
    nodeDefaults {
        projectCordapp {
            deploy = false
        }
        cordapp project(':cordapp-contracts-states')
        cordapp project(':cordapp')
    }
    node {
        name "O=HUS,L=Helsinki,C=FI"
        p2pPort 10008
        rpcSettings {
            address "localhost:10009"
            adminAddress "localhost:10049"
        }
        webPort 10017
        rpcUsers = [[user: "user1", "password": "test", "permissions": ["ALL"]]]
        extraConfig = ext.extraConfig + [
                'dataSourceProperties.dataSource.url' :
                        "jdbc:postgresql://localhost:5432/hus_db?currentSchema=corda_schema"
        ]
        drivers = ext.drivers
    }
}
The database.runMigration property is available in Corda Enterprise only.
To control database schema creation in Corda Open Source, use initialiseSchema.
initialiseSchema
Boolean which indicates whether to update the database schema at startup (or create the schema when node starts for the first time). If set to false on startup, the node will validate if it’s running against a compatible database schema.
Default: true
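As a minimal sketch (assuming Corda Open Source 4.x, where the node configuration key is database.initialiseSchema), the extraConfig map would become:

ext.extraConfig = [
        'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
        'dataSourceProperties.dataSource.user'     : "corda",
        'dataSourceProperties.dataSource.password' : "corda1234",
        'database.transactionIsolationLevel'       : 'READ_COMMITTED',
        // Corda Open Source: create/update the schema at node startup instead of runMigration
        'database.initialiseSchema'                : "true"
]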
You can refer to the link below for other database properties you can set:
https://docs.corda.net/corda-configuration-file.html
I want to access my Postgres cluster, running in Kubernetes on Azure, with a client (e.g. pgAdmin) to search manually through the data.
At the moment my cluster only has one ingress, which points to a self-written API gateway.
I found a few ideas online and tried to add a load balancer in Kubernetes, without success.
My Postgres cluster in Terraform:
resource "helm_release" "postgres-cluster" {
name = "postgres-cluster"
repository = "https://charts.bitnami.com/bitnami"
chart = "postgresql-ha"
namespace = var.kube_namespace
set {
name = "global.postgresql.username"
value = var.postgresql_username
}
set {
name = "global.postgresql.password"
value = var.postgresql_password
}
}
Results in a running cluster:
Now my attempt to add a load balancer:
resource "kubernetes_manifest" "postgresql-loadbalancer" {
manifest = {
"apiVersion" = "v1"
"kind" = "Service"
"metadata" = {
"name" = "postgres-db-lb"
"namespace" = "${var.kube_namespace}"
}
"spec" = {
"selector" = {
"app.kubernetes.io/name" = "postgresql-ha"
}
"type" = "LoadBalancer"
"ports" = [{
"port" = "5432"
"targetPort" = "5432"
}]
}
}
}
Will result in:
But still no success when I try to connect to the external IP and port.
Found the answer: it was an internal firewall I had never thought of. The code is absolutely correct; a load balancer does work here.
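For anyone verifying a similar setup, a quick sanity check (a sketch; the namespace and credentials are placeholders matching the resources above) is to read the external IP assigned to the service and connect to it directly:

# Show the external IP assigned by the cloud load balancer
kubectl get svc postgres-db-lb -n <your-namespace>

# Connect with psql (or point pgAdmin at the same host and port)
psql -h <EXTERNAL-IP> -p 5432 -U <postgresql-username> -d postgres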
Created an AWS DMS pipeline:
Source endpoint - MongoDB
Target endpoint - RDS PostgreSQL
Successfully did all the security configuration, and both endpoints returned successful while testing it.
For the MongoDB source, I am using one of the three replica set members, with a username and password that are not the admin user's.
I also added the "changeStream" privilege to the replica set user.
But when starting the DMS migration task, I get this error in CloudWatch:
Encountered an error while initializing change stream: 'not authorized on admin to execute command
{ aggregate: 1, pipeline: [ { $changeStream: { fullDocument: "updateLookup", startAtOperationTime: Timestamp(1656005815, 0),
allChangesForCluster: true } }, "ok" : { "$numberDouble" : "0.0" },
"errmsg" : "not authorized on admin to execute command { aggregate: 1, pipeline: [ { $changeStream: { fullDocument:
\"updateLookup\", startAtOperationTime: Timestamp(1656005815, 0), allChangesForCluster: true } },
74f1-4aab-9ca1-f964ab655777\ (change_streams_capture.c:356)
I assume this is due to some missing privileges on the MongoDB replica set user.
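Since allChangesForCluster: true opens a change stream against the whole deployment, MongoDB requires the changeStream and find actions on all non-system collections across all databases. A sketch of a custom role granting this (the role and user names are made up for illustration):

// Run against the admin database of the replica set
use admin

db.createRole({
  role: "dmsChangeStreamRole",   // hypothetical role name
  privileges: [
    // db: "" and collection: "" cover all non-system collections in all databases
    { resource: { db: "", collection: "" }, actions: [ "find", "changeStream" ] }
  ],
  roles: []
})

// Grant it to the (hypothetical) user configured on the DMS source endpoint
db.grantRolesToUser("dmsUser", [ { role: "dmsChangeStreamRole", db: "admin" } ])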
I'm trying to host a Grafana 7.4 image in ECS Fargate using an EFS volume for persistent storage.
Using Terraform I have created the required resources and given the task access to the EFS volume via an access point:
resource "aws_efs_access_point" "default" {
file_system_id = aws_efs_file_system.default.id
posix_user {
gid = 0
uid = 472
}
root_directory {
path = "/opt/data"
creation_info {
owner_gid = 0
owner_uid = 472
permissions = "600"
}
}
}
Note that I have set owner permissions as per the guides in https://grafana.com/docs/grafana/latest/installation/docker/#migrate-from-previous-docker-containers-versions (I've tried both group id 0 and 1 as the documentation seems to be inconsistent on the gid).
Using a base Alpine image in place of the Grafana image, I've confirmed that the directory /var/lib/grafana exists within the container with the correct uid and gid set. However, on attempting to run the Grafana image I get the error message
GF_PATHS_DATA='/var/lib/grafana' is not writable.
I am launching the task with a Terraform-managed task definition.
resource "aws_ecs_task_definition" "default" {
family = "${var.name}"
container_definitions = "${data.template_file.container_definition.rendered}"
memory = "${var.memory}"
cpu = "${var.cpu}"
requires_compatibilities = [
"FARGATE"
]
network_mode = "awsvpc"
execution_role_arn = "arn:aws:iam::REDACTED_ID:role/ecsTaskExecutionRole"
volume {
name = "${var.name}-volume"
efs_volume_configuration {
file_system_id = aws_efs_file_system.default.id
transit_encryption = "ENABLED"
root_directory = "/opt/data"
authorization_config {
access_point_id = aws_efs_access_point.default.id
}
}
}
tags = {
Product = "${var.name}"
}
}
With the container definition
[
  {
    "portMappings": [
      {
        "hostPort": 80,
        "protocol": "tcp",
        "containerPort": 80
      }
    ],
    "mountPoints": [
      {
        "sourceVolume": "${volume_name}",
        "containerPath": "/var/lib/grafana",
        "readOnly": false
      }
    ],
    "cpu": 0,
    "secrets": [
      ...
    ],
    "environment": [],
    "image": "grafana/grafana:7.4.3",
    "name": "${name}",
    "user": "472:0"
  }
]
For "user" I have tried "grafana", "742:0", "742" and "742:1" when trying gid 1.
I believe the Terraform, security groups, mount_targets, etc. are all correct, as I can get an Alpine image to run:
ls -lash /var/lib
> drw------- 2 472 root 6.0K Mar 12 11:22 grafana
I believe you are hitting a known AWS ECS limitation: https://github.com/aws/containers-roadmap/issues/938
Anyway, the file-system approach doesn't seem very cloud friendly (especially if you want to scale horizontally: problems with concurrent writes from multiple tasks, IOPS limitations, ...). Just provision a proper DB (e.g. Aurora/RDS MySQL, a multi-AZ cluster if you need HA) and you will have a nicely ops-less AWS deployment.
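As a sketch of that approach (the endpoint, database name, and secret ARN below are placeholders, not values from this setup), Grafana can be pointed at an external database purely through environment variables in the container definition, with the password supplied via "secrets":

"environment": [
  { "name": "GF_DATABASE_TYPE", "value": "mysql" },
  { "name": "GF_DATABASE_HOST", "value": "<aurora-cluster-endpoint>:3306" },
  { "name": "GF_DATABASE_NAME", "value": "grafana" },
  { "name": "GF_DATABASE_USER", "value": "grafana" }
],
"secrets": [
  { "name": "GF_DATABASE_PASSWORD", "valueFrom": "<secrets-manager-or-ssm-parameter-arn>" }
]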
I'm trying to enable PLAIN authentication security on a MongoDB replica set shard managed with Ops Manager, following their documentation: https://docs.opsmanager.mongodb.com/v4.0/tutorial/enable-ldap-authentication-for-group/
The issue I'm facing is with the automation agent trying to get the mongos status while restarting after enabling security. Please see the error output below:
<mongos_5> [09:18:19.711] Failed to compute states :
<mongos_5> [09:18:19.711] Error calling ComputeState : <mongos_5> [09:18:19.632] Error getting current config from running mongo using conn params = mongos01:27017 (local=false) :
<mongos_5> [09:18:19.632] Error getting pid for mongos01:27017 (local=false) :
<mongos_5> [09:18:19.632] Error running command for runCommandWithTimeout(dbName=admin, cmd=[{serverStatus 1} {locks false} {recordStats false}]) :
result={"$clusterTime":{"clusterTime":6808443558471663617,"signature": {"hash":"e44BxV30B7dTpampo4VZsVuio7E=","keyId":6808441655801151517}},"code":13,"codeName":"Unauthorized",
"errmsg":"command serverStatus requires authentication","ok":0,"operationTime":6808443558471663617} connection=&{mongos01:27017 (local=false) 2 true 0xc4207b21a0 2020-03-26 09:18:19.627337419 +0000 UTC 0xc4207bdef0 <nil> }
identityUsed= : command serverStatus requires authentication
I noticed that even though Ops Manager is not able to get the status, security was enabled successfully and the PLAIN authentication mechanism works, but the status hangs at
Start the process ... Start MongoDB process
I tried this over the API following the mongodb-labs repo https://github.com/mongodb-labs/mms-api-examples/blob/master/automation/api_usage_example/configs/security_ldap_cluster.json, and also manually following the MongoDB docs, but every time I face the same error.
In the end I enabled LDAP (PLAIN) only for MongoDB in the mongo config (see the Ops Manager API call snippet below), and avoided enabling it in Ops Manager for the agents as well.
{
  "args2_6": {
    "net": {
      "port": 28001
    },
    "replication": {
      "replSetName": "rs0"
    },
    "storage": {
      "dbPath": "/data/mongo"
    },
    "systemLog": {
      "destination": "file",
      "path": "/data/mongo/mongodb.log"
    },
    "security": {
      "authorization": "enabled"
    },
    "setParameter": {
      "saslauthdPath": "",
      "authenticationMechanisms": "PLAIN,MONGO-CR,SCRAM-SHA-256"
    }
  }, ...
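To verify that PLAIN (LDAP) authentication actually works against the deployment, a connection test along these lines can be used (host, port, and credentials are placeholders):

mongo --host <mongos-or-mongod-host> --port <port> \
      --username "<ldap-user>" --password "<ldap-password>" \
      --authenticationDatabase '$external' \
      --authenticationMechanism PLAIN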
I am trying to use Cygnus with MongoDB, but no data has been persisted in the database.
Here is the notification received in Cygnus:
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Starting transaction (1437482681-118-0000000000)
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Received data ({ "subscriptionId" : "55a73819d0c457bb20b1d467", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "enocean", "isPattern" : "false", "id" : "enocean:myButtonA", "attributes" : [ { "name" : "ButtonValue", "type" : "", "value" : "ON", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2015-07-20T21:29:56.509293Z" } ] } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Event put in the channel (id=1454120446, ttl=10)
Here is my agent configuration:
cygnusagent.sources = http-source
cygnusagent.sinks = OrionMongoSink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /home/egm_demo/usr/fiware-cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
#cygnusagent.sinks.mongo-sink.db_prefix = kura
# prefix for the MongoDB collections
#cygnusagent.sinks.mongo-sink.collection_prefix = button
# true if collection names are based on a hash, false for human-readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
# ============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
Here is my rule:
{
  "grouping_rules": [
    {
      "id": 1,
      "fields": [
        "button"
      ],
      "regex": ".*",
      "destination": "kura",
      "fiware_service_path": "/kuraspath"
    }
  ]
}
Any ideas of what I have missed? Thanks in advance for your help!
This configuration parameter is wrong:
cygnusagent.sinks = OrionMongoSink
According to your configuration, it must be mongo-sink (I mean, you are configuring a Mongo sink named mongo-sink when you configure lines such as cygnusagent.sinks.mongo-sink.type).
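In other words, the sink name declared at the top of the agent configuration has to match the name used in the per-sink properties; a minimal correction would be:

cygnusagent.sources  = http-source
# must match the "mongo-sink" name used below, e.g. in cygnusagent.sinks.mongo-sink.type
cygnusagent.sinks    = mongo-sink
cygnusagent.channels = mongo-channel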
In addition, I would recommend not using the grouping rules feature; it is an advanced feature for sending the data to a collection different from the default one, and in a first stage I would play with the default behaviour. Thus, my recommendation is to leave the path to the file in cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file, but comment out all the JSON within it :)