Pulumi invalid network configuration for bridge mode ECS service

I'm trying to create an ECS service using Pulumi, with a task definition that uses network mode bridge, in order to run multiple tasks on a single instance.
When creating the service, Pulumi fails with: Plan apply failed: InvalidParameterException: Network Configuration is not valid for the given networkMode of this task definition.
It seems Pulumi provides a networkConfiguration even though this is not permitted when the network mode is bridge:
[urn=urn:pulumi:dev::pulumi::pulumi:pulumi:Stack::pulumi-dev]
    + aws:ecs/service:Service: (create)
        [urn=urn:pulumi:dev::pulumi::awsx:x:ecs:EC2Service$aws:ecs/service:Service::test]
        cluster                        : "arn:aws:ecs:eu-central-1:131009595785:cluster/test-12196f9"
        deploymentMaximumPercent       : 200
        deploymentMinimumHealthyPercent: 100
        desiredCount                   : 2
        enableEcsManagedTags           : false
        launchType                     : "EC2"
        loadBalancers                  : [
            [0]: {
                containerName : "backend"
                containerPort : 3000
                targetGroupArn: "arn:aws:elasticloadbalancing:eu-central-1:131009595785:targetgroup/57d096ee-73ab93e/fce1408d3c067066"
            }
        ]
        name                           : "test-3e870ec"
        networkConfiguration           : {
            assignPublicIp: false
            securityGroups: [
                [0]: "sg-035513ef294414b65"
            ]
            subnets       : [
                [0]: "subnet-08831ff5642406fc7"
                [1]: "subnet-00e3e870707b6aa90"
            ]
        }
        schedulingStrategy             : "REPLICA"
        taskDefinition                 : "arn:aws:ecs:eu-central-1:131009595785:task-definition/test-aece9bcd:24"
        waitForSteadyState             : true
Is there a way to avoid setting the networkConfiguration? I can set the service's securityGroups and subnets to [], but there is no way to unset assignPublicIp.

Looks like this was not yet supported by Pulumi, but it was fixed in PR 233.
The fix is included in pulumi-awsx 0.18.2.
A networkConfiguration is now only specified for network mode awsvpc.
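With 0.18.2, a bridge-mode service can be declared roughly as in the sketch below and should no longer emit a networkConfiguration. This is a minimal sketch, not the original program: the cluster/task names, image, and memory size are placeholders.

import * as awsx from "@pulumi/awsx";

const cluster = new awsx.ecs.Cluster("test");

// Bridge networking lets several tasks share one instance via dynamic host ports.
const taskDefinition = new awsx.ecs.EC2TaskDefinition("test", {
    networkMode: "bridge",
    containers: {
        backend: {
            image: "my-registry/backend:latest", // placeholder image
            memory: 512,
            portMappings: [{ containerPort: 3000 }],
        },
    },
});

// No subnets/securityGroups/assignPublicIp are passed: with bridge mode,
// awsx >= 0.18.2 omits the service's networkConfiguration entirely.
const service = new awsx.ecs.EC2Service("test", {
    cluster: cluster,
    taskDefinition: taskDefinition,
    desiredCount: 2,
});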

Related

Specifying multiple servers in MongoDB connection string prevents it from connecting, but only specifying the primary server works

We have four MongoDB servers, of which the first is currently the primary and the other three are replicas. If I specify all four servers in the connection string, it fails to connect at all, but if I just specify the first one, it connects fine. This is bad, because if the first server fails the client will not be able to connect.
This works:
mongodb://login:password@server1:27017/admin?readPreference=Primary
This does NOT work:
mongodb://login:password@server1:27017,server2:27017,server3:27017,server4:27017/admin?readPreference=Primary
Exception:
A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = WritableServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "Automatic", Type : "ReplicaSet", State : "Connected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/server1:27017" }", EndPoint: "Unspecified/server1:27017", State: "Disconnected", Type: "Unknown", HeartbeatException: "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.
The service trying to connect runs on Kube.
Any idea why this would be?
You need to add the replica set name to the connection string: replicaSet=myRepl
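For this cluster that would look like the string below (assuming the replica set is actually named myRepl; use the _id reported by rs.conf() on your servers):

mongodb://login:password@server1:27017,server2:27017,server3:27017,server4:27017/admin?readPreference=Primary&replicaSet=myRepl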

MongoDB connection timeout in Azure Data Factory while syncing data

I have created a dataset in Azure Data Factory to connect to MongoDB.
While configuring it I added the MongoDB connection string, and the connection test showed as successful (as shown in the screenshot).
After that I configured the whole pipeline properly.
Now I am facing an issue when running that pipeline:
I am getting the error below saying the MongoDB connection is not valid.
Operation on target Copydataset1 failed: Failure happened on 'Source' side. ErrorCode=UserErrorMongoDbConnectionTimeout,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Connection to MongoDB server is timeout. This is usually caused by invalid connection string.,Source=Microsoft.DataTransfer.Runtime.MongoDbV2Connector,''Type=System.TimeoutException,Message=A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = ReadPreferenceServerSelector{ ReadPreference = { Mode : Primary } }, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "Automatic", Type : "Unknown", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "192.0.0.1:27017" }", EndPoint: "192.0.0.1:27017", State: "Disconnected", Type: "Unknown" }] }.,Source=MongoDB.Driver.Core,'
NOTE: The IP 192.0.0.1:27017 is only an example.

Error while generating node info file with database.runMigration

I added database.runMigration: true to my build.gradle file but I'm getting this error when running deployNodes. What's causing this?
[ERROR] 14:05:21+0200 [main] subcommands.ValidateConfigurationCli.logConfigurationErrors$node - Error(s) while parsing node configuration:
- for path: "database.runMigration": Unknown property 'runMigration'
Here's my build.gradle's deployNodes task:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    directory "./build/nodes"
    ext.drivers = ['.jdbc_driver']
    ext.extraConfig = [
        'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
        'dataSourceProperties.dataSource.user'     : "corda",
        'dataSourceProperties.dataSource.password' : "corda1234",
        'database.transactionIsolationLevel'       : 'READ_COMMITTED',
        'database.runMigration'                    : "true"
    ]
    nodeDefaults {
        projectCordapp {
            deploy = false
        }
        cordapp project(':cordapp-contracts-states')
        cordapp project(':cordapp')
    }
    node {
        name "O=HUS,L=Helsinki,C=FI"
        p2pPort 10008
        rpcSettings {
            address "localhost:10009"
            adminAddress "localhost:10049"
        }
        webPort 10017
        rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
        extraConfig = ext.extraConfig + [
            'dataSourceProperties.dataSource.url' :
                "jdbc:postgresql://localhost:5432/hus_db?currentSchema=corda_schema"
        ]
        drivers = ext.drivers
    }
}
The database.runMigration property is Corda Enterprise-only.
To control database migration in Corda Open Source, use initialiseSchema.
initialiseSchema
Boolean which indicates whether to update the database schema at startup (or create the schema when the node starts for the first time). If set to false, on startup the node will validate that it is running against a compatible database schema.
Default: true
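For example, a minimal sketch of the same extraConfig map adapted for Corda Open Source (assuming the fully qualified key is database.initialiseSchema, mirroring the docs excerpt above):

ext.extraConfig = [
    'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
    'dataSourceProperties.dataSource.user'     : "corda",
    'dataSourceProperties.dataSource.password' : "corda1234",
    'database.transactionIsolationLevel'       : 'READ_COMMITTED',
    // Open Source replacement for the Enterprise-only runMigration flag
    'database.initialiseSchema'                : "true"
]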
You may refer to the link below for other database properties you can set.
https://docs.corda.net/corda-configuration-file.html

MongoDB Go driver looking at localhost when it should not

I'm not a Go guy; I just need to use a plugin written in Go, and I'm having some trouble between the plugin and MongoDB.
The error is:
server selection error: server selection timeout
current topology: Type: Unknown
Servers:
Addr: localhost:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: dial tcp 127.0.0.1:27017: connect: connection refused
exit status 1
My configuration:
time="2019-09-03T16:29:35Z" level=debug msg="Host: ip-XXX-XX-XX-XXX.sa-east-1.compute.internal"
time="2019-09-03T16:29:35Z" level=debug msg="Port: 27017"
time="2019-09-03T16:29:35Z" level=debug msg="Username: user"
time="2019-09-03T16:29:35Z" level=debug msg="Password: user123*"
time="2019-09-03T16:29:35Z" level=debug msg="DBName: dbBackend"
The plugin snippet that performs the connection:
addr := fmt.Sprintf("mongodb://%s:%s", m.Host, m.Port)
to := 60 * time.Second
opts := options.ClientOptions{
    ConnectTimeout: &to,
}
opts.ApplyURI(addr)
if m.Username != "" && m.Password != "" {
    opts.Auth = &options.Credential{
        AuthSource:  m.DBName,
        Username:    m.Username,
        Password:    m.Password,
        PasswordSet: true,
    }
}
client, err := mongo.Connect(context.TODO(), &opts)
if err != nil {
    return m, errors.Errorf("couldn't start mongo backend. error: %s\n", err)
}
err1 := client.Ping(context.TODO(), nil)
if err1 != nil {
    log.Fatal(err1) // error happens here
}
log.Debugf("MONGO CONNECTED")
m.Conn = client
return m, nil
I just can't figure out why the Mongo driver is looking at localhost when I'm setting the address of my MongoDB server.
EDIT 1
My DB has a replica set configured, solely so that change streams can be used.
This is my RS configuration:
{
    "_id" : "rs0",
    "version" : 69559,
    "protocolVersion" : 1,
    "writeConcernMajorityJournalDefault" : true,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : {
        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5cf684c3c0db3f53727d1bb4")
    }
}
Any help solving this is appreciated.
Thanks
why the Mongo driver is looking at localhost if I'm setting the address of my MongoDB server
When the mongo-go-driver client connects to a MongoDB deployment, it performs Server Discovery and Monitoring to discover one or more servers (MongoDB being a distributed database by nature). One of the early steps is to begin monitoring the topology by invoking the isMaster command on all servers. Based on the output of isMaster, the client will try to contact those servers. In the case of a replica set (your case), the client strives to connect to the primary server (taken from isMaster.primary).
However, the member hostname in your configuration is not a fully qualified domain name (FQDN) resolvable from the client's machine. The client's machine tries to connect to localhost, which is defined as the replica set primary, and thus fails to make a connection. This is also why you're seeing a status where the current topology is Type: Unknown yet State: Connected: the client failed to discover the deployment topology before it was even able to select a server to execute the command (the ping).
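You can observe this handshake yourself from the client machine. Given the rs.conf() shown above, the server will report localhost:27017 as the primary (mongo shell against the remote host; the hostname is the one from your debug log):

mongo --host ip-XXX-XX-XX-XXX.sa-east-1.compute.internal:27017
> db.isMaster().primary
localhost:27017    // this is the address the driver then tries to reach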
You can solve this by setting resolvable hostnames as the value of the members' host field in the replica set configuration. In addition, when possible, use a logical DNS hostname instead of an IP address, as this avoids configuration changes due to IP address changes.
You can change the replica set hostnames using rs.reconfig(), i.e. (index 0, since your configuration has a single member):
cfg = rs.conf()
cfg.members[0].host = "<RESOLVABLE HOSTNAME>:<PORT NUMBER>"
rs.reconfig(cfg)
In your case, where there's only one replica set member, it's quite straightforward. However, if you're in production and have more than one member, you can follow the steps outlined in Change Hostnames in a Replica Set, where there are two options:
Change hostnames without disrupting availability
Change all hostnames at the same time (in one go)
Having said all of the above: alternatively, as your replica set deployment is only one server (development mode), you can set the connection mode to direct via ClientOptions.SetDirect(), which specifies that the client should connect directly to a server instead of auto-discovering other servers in the cluster (although this means you have no redundancy), i.e.:
opts := options.ClientOptions{ConnectTimeout: &timeoutVariable}
opts.SetDirect(true)
opts.ApplyURI(addr)
client, err := mongo.Connect(context.TODO(), &opts)
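Equivalently (assumption: your driver version supports the Go driver's connect URI option), direct mode can be requested in the connection string itself, so the rest of the snippet stays unchanged:

addr := fmt.Sprintf("mongodb://%s:%s/?connect=direct", m.Host, m.Port)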

Mongos can add replica set, but can't connect

I'm setting up a sharded mongo cluster. I have two replica sets consisting of two nodes each, a replica set of three config servers, and a single mongos instance.
I have been able to add the replica set to the mongos instance:
sh.addShard("rs1/shard-rs01-s01");
This returns {"ok" : 1} and the same is true of the second replica set.
However, when I try to do any database operation, such as db.test.insert(...), I receive this error:
2017-02-23T01:17:28.599+0000 I ASIO [CatalogManagerReplacer] Connecting to shard-RS01-S01:27017
2017-02-23T01:17:28.600+0000 I ASIO [CatalogManagerReplacer] Connecting to config-01:27019
2017-02-23T01:17:28.603+0000 I ASIO [CatalogManagerReplacer] Successfully connected to config-01:27019
2017-02-23T01:17:48.600+0000 I ASIO [CatalogManagerReplacer] Failed to connect to shard-RS01-S01:27017 - ExceededTimeLimit: Operation timed out
I double-checked that the firewall wasn't blocking the connection by disabling it on all of the systems. For what it's worth, on the node that runs the mongos instance I can connect to the replica set directly from the command line with this command, regardless of the firewall state:
mongo --host rs1/shard-rs01-s01:27017
So I am fairly sure it is not a firewall issue. Does anyone have any ideas?
Here's a shard map of the setup, in case it is useful to anyone able to help:
mongos> db.runCommand("getShardMap")
{
    "map" : {
        "config" : "rs0/config-01:27019,config-02:27019,config-03:27019",
        "config-01:27019" : "rs0/config-01:27019,config-02:27019,config-03:27019",
        "config-02:27019" : "rs0/config-01:27019,config-02:27019,config-03:27019",
        "config-03:27019" : "rs0/config-01:27019,config-02:27019,config-03:27019",
        "rs0/config-01:27019,config-02:27019,config-03:27019" : "rs0/config-01:27019,config-02:27019,config-03:27019",
        "rs1" : "rs1/shard-RS01-S01:27017,shard-RS01-S02:27017",
        "rs1/shard-RS01-S01:27017,shard-RS01-S02:27017" : "rs1/shard-RS01-S01:27017,shard-RS01-S02:27017",
        "rs2" : "rs2/shard-RS02-S03:27017,shard-RS02-S04:27017",
        "rs2/shard-RS02-S03:27017,shard-RS02-S04:27017" : "rs2/shard-RS02-S03:27017,shard-RS02-S04:27017",
        "shard-RS01-S01:27017" : "rs1/shard-RS01-S01:27017,shard-RS01-S02:27017",
        "shard-RS01-S02:27017" : "rs1/shard-RS01-S01:27017,shard-RS01-S02:27017",
        "shard-RS02-S03:27017" : "rs2/shard-RS02-S03:27017,shard-RS02-S04:27017",
        "shard-RS02-S04:27017" : "rs2/shard-RS02-S03:27017,shard-RS02-S04:27017"
    },
    "ok" : 1
}
You need to initiate the config server replica set that your mongos points at:
rs.initiate({
    _id: "configReplSet",
    configsvr: true,
    members: [{ _id: 0, host: "mongo-config-1:27017" }]
})
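Once the config server replica set is initiated, you can sanity-check the deployment with standard shell commands (hostnames as in the shard map above):

// on a config server: the replica set should report ok
rs.status().ok                       // expect 1
// on the mongos: both shards should be listed and reachable
sh.status()
db.adminCommand({ listShards: 1 })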