Fivetran Connector - connection_type not getting persisted - pulumi

I'm trying to use the Fivetran API to dynamically create Connectors. One problem I've run into is that when I set connection_type = PrivateLink, it doesn't work.
Specifically, when I try to set connection_type on the Connector itself, I get an error:
error: fivetran:index/connector:Connector resource 'my-connector' has a problem: Computed attribute cannot be set. Examine values at 'Connector.Config.ConnectionType'.
When I set connection_type on the Destination that the Connectors belong to... nothing happens! Looking at the Connectors in the UI, they all have their connection_type set to Direct Connection, even though the Destination is supposed to be set to PrivateLink.
Is there some trick to get this working?
I'm interacting with Fivetran via Pulumi, not direct REST calls. The config for a Connector looks something like this:
destination = Destination(
    resource_name=destination_name,
    opts=resource_options,
    group_id=fivetran_group.id,
    region="AWS_US_EAST_1",
    service="big_query",
    config={
        "project_id": "SomeProjectId",
        "connection_type": "PrivateLink",
        "data_set_location": "AWS_US_EAST_1",
    },
    run_setup_tests=False,
    time_zone_offset="-5")
#...
connector = Connector(
    resource_name=db + "_data_src",
    opts=resource_options,
    group_id=fivetran_group.id,
    service="postgres",
    paused=True,
    pause_after_trial=True,
    sync_frequency=60,
    destination_schema={
        "prefix": "xyz_" + db.lower(),
    },
    config={
        "host": "very.long.aws.hostname.com",
        "port": 5432,
        "database": "XYZ_" + db.upper(),
        # "connection_type": "PrivateLink",
        "user": "someUser",
        "password": "somePassword",
        "update_method": "XMIN",
    })

Related

Terraform enable or disable a resource conditionally

My requirement is to create or delete a resource by specifying an enable flag as true or false (if false, the resource should be deleted; if true, it should be created).
Kindly refer to the code below - here I am creating a "confluent topic" resource and calling it dynamically using a for_each condition.
Confluent Topic creation
topic.tf file:
resource "confluent_kafka_topic" "topic" {
kafka_cluster {
id = confluent_kafka_cluster.dedicated.id
}
for_each = { for t in var.topic_name : t.topic_name => t }
topic_name = each.value["topic_name"]
partitions_count = each.value["partitions_count"]
rest_endpoint = confluent_kafka_cluster.dedicated.rest_endpoint
credentials {
key = confluent_api_key.app-manager-kafka-api-key.id
secret = confluent_api_key.app-manager-kafka-api-key.secret
}
}
Variable declared as:
variable "topic_name" {
type = list(map(string))
default = [{
"topic_name" = "default_topic"
}]
}
And finally it is executed through the DEV.tfvars file:
topic_name = [
  {
    topic_name       = "avro-topic-1"
    partitions_count = "6"
  },
  {
    topic_name       = "json-topic-1"
    partitions_count = "8"
  },
]
The above code works fine and I am able to create and delete multiple resources. I want to modify it further and add a flag/toggle to create or delete a resource.
An example is shown below:
topic_name = [
  {
    topic_name       = "avro-topic-1"
    partitions_count = "6"
    enable           = true  # this flag will create the resource
  },
  {
    topic_name       = "json-topic-1"
    partitions_count = "8"
    enable           = false # this flag will delete the resource
  },
]
Kindly suggest how this can be achieved, and whether there is a different approach to follow.
As mentioned in my comment, I think this can be achieved with the following change:
resource "confluent_kafka_topic" "topic" {
for_each = { for t in var.topic_name : t.topic_name => t if t.enable }
kafka_cluster {
id = confluent_kafka_cluster.dedicated.id
}
topic_name = each.value["topic_name"]
partitions_count = each.value["partitions_count"]
rest_endpoint = confluent_kafka_cluster.dedicated.rest_endpoint
credentials {
key = confluent_api_key.app-manager-kafka-api-key.id
secret = confluent_api_key.app-manager-kafka-api-key.secret
}
}
Additionally, for_each should probably be at the top of the resource block to make sure it is visible to the reader immediately. The if t.enable part ensures that for_each creates a resource only for entries that have enable = true.
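For completeness, here is one way the variable itself can carry the flag - only a sketch, not part of the original answer, and it assumes Terraform 1.3+ for optional() object attributes with defaults:

variable "topic_name" {
  type = list(object({
    topic_name       = string
    partitions_count = optional(string)
    enable           = optional(bool, true) # topics are created by default; set false to have them destroyed
  }))
  default = [{
    topic_name = "default_topic"
  }]
}

With this in place, the tfvars entries from the question (enable = true / enable = false) are accepted as-is, and entries with enable = false simply drop out of the for_each map, so Terraform destroys those topics on the next apply. If you prefer to keep the existing list(map(string)) type instead, you can add enable as a "true"/"false" string; Terraform converts it to a bool in the if condition, but every entry must then include the key, because map lookups fail on missing keys.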

How to reach PostgresCluster in Azure Cloud with a client from local?

I want to access my Postgres cluster running in Kubernetes on Azure cloud with a client (e.g. pgAdmin) to search manually through the data.
At the moment my complete cluster only has one ingress, which points to a self-written API gateway.
I found a few ideas online and tried to add a load balancer in Kubernetes, without success.
My Postgres cluster in Terraform:
resource "helm_release" "postgres-cluster" {
name = "postgres-cluster"
repository = "https://charts.bitnami.com/bitnami"
chart = "postgresql-ha"
namespace = var.kube_namespace
set {
name = "global.postgresql.username"
value = var.postgresql_username
}
set {
name = "global.postgresql.password"
value = var.postgresql_password
}
}
This results in a running cluster.
Now my attempt to add a load balancer:
resource "kubernetes_manifest" "postgresql-loadbalancer" {
manifest = {
"apiVersion" = "v1"
"kind" = "Service"
"metadata" = {
"name" = "postgres-db-lb"
"namespace" = "${var.kube_namespace}"
}
"spec" = {
"selector" = {
"app.kubernetes.io/name" = "postgresql-ha"
}
"type" = "LoadBalancer"
"ports" = [{
"port" = "5432"
"targetPort" = "5432"
}]
}
}
}
This results in a LoadBalancer service, but I still have no success when I try to connect to the external IP and port.
Found the answer - it was an internal firewall I had never thought of. The code is absolutely correct; a load balancer does work here.

Receiving 404 when trying to edit a datasource despite being able to retrieve its details

I have a datasource created in Grafana and am attempting to update it to refresh the bearer token used for auth access.
However, I'm receiving a 404 Not Found error from the Grafana API when making a request to localhost:3000/api/datasources/uid/:uid with a uid just received from the datasources/name API - attempting to update as per the documentation: https://grafana.com/docs/grafana/latest/developers/http_api/data_source/#update-an-existing-data-source
I'm using the Grafana open-source Docker container with the Infinity plugin.
docker run -d -p 3000:3000 --name=grafana -e "GF_INSTALL_PLUGINS=yesoreyeram-infinity-datasource" grafana/grafana-oss
I'm able to create a datasource via the api, just can't update an existing one.
My code is:
from json import loads, dumps
from requests import get, post

grafana_api_token = '<my api token>'
new_access_token = '<my new bearer token>'
my_data_source = 'my_data_source'
grafana_header = {"authorization": f"Bearer {grafana_api_token}", "content-type": "application/json;charset=UTF-8"}

# Look up the datasource by name to get its uid
grafana_datasource_url = f"http://localhost:3000/api/datasources/name/{my_data_source}"
firebolt_datasource_resp = get(url=grafana_datasource_url, headers=grafana_header)
full_datasource = loads(firebolt_datasource_resp.content.decode("utf-8"))
datasource_uid = full_datasource["uid"]

# Send the updated datasource definition, including the new bearer token
update_token_url = f"http://localhost:3000/api/datasources/uid/{datasource_uid}"
new_data = {
    "id": full_datasource["id"],
    "uid": full_datasource["uid"],
    "orgId": full_datasource["orgId"],
    "name": "new_data_source",
    "type": full_datasource["type"],
    "access": full_datasource["access"],
    "url": full_datasource["url"],
    "user": full_datasource["user"],
    "database": full_datasource["database"],
    "basicAuth": full_datasource["basicAuth"],
    "basicAuthUser": full_datasource["basicAuthUser"],
    "withCredentials": full_datasource["withCredentials"],
    "isDefault": full_datasource["isDefault"],
    "jsonData": full_datasource["jsonData"],
    "secureJsonData": {
        "bearerToken": new_access_token
    }
}
update_bearer_token_resp = post(url=update_token_url, data=dumps(new_data), headers=grafana_header)
Oh, oh, oh, idiot mode. Using post rather than put. Doh.

Unable to connect to RDS Aurora DB locally

I have a somewhat basic understanding of cloud architecture and thought I would try to spin up a PostgreSQL DB in Terraform. I am using Secrets Manager to store credentials...
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
resource "aws_secretsmanager_secret" "secret" {
name = "admin"
description = "Database admin user password"
}
resource "aws_secretsmanager_secret_version" "version" {
secret_id = aws_secretsmanager_secret.secret.id
secret_string = <<EOF
{
"username": "db_user",
"password": "${random_password.password.result}"
}
EOF
}
locals {
db_credentials = jsondecode(data.aws_secretsmanager_secret_version.credentials.secret_string)
}
And an Aurora DB instance, which should be publicly accessible, with the following code:
resource "aws_rds_cluster" "cluster-demo" {
cluster_identifier = "aurora-cluster-demo"
database_name = "test_db"
master_username = local.db_credentials["username"]
master_password = local.db_credentials["password"]
port = 5432
engine = "aurora-postgresql"
engine_version = "12.7"
apply_immediately = true
skip_final_snapshot = "true"
}
// child instances inherit the same config
resource "aws_rds_cluster_instance" "cluster_instance" {
identifier = "aurora-cluster-demo-instance"
cluster_identifier = aws_rds_cluster.cluster-demo.id
engine = aws_rds_cluster.cluster-demo.engine
engine_version = aws_rds_cluster.cluster-demo.engine_version
instance_class = "db.r4.large"
publicly_accessible = true # Remove
}
When I terraform apply this, everything gets created as expected, but when I run psql -h <ENDPOINT_TO_CLUSTER> I get prompted to enter the password for admin. Going to the Secrets Manager console, copying the password, and entering it yields:
FATAL: password authentication failed for user "admin"
Similarly, if I try:
psql --username=db_user --host=<ENDPOINT_TO_CLUSTER> --port=5432
I am prompted, as expected, to enter the password for db_user, which yields:
psql: FATAL: database "db_user" does not exist
Edit 1
secrets.tf
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
resource "aws_secretsmanager_secret" "secret" {
name = "admin"
description = "Database admin user password"
}
resource "aws_secretsmanager_secret_version" "version" {
secret_id = aws_secretsmanager_secret.secret.id
secret_string = <<EOF
{
"username": "db_user",
"password": "${random_password.password.result}"
}
EOF
}
database.tf
resource "aws_rds_cluster" "cluster-demo" {
cluster_identifier = "aurora-cluster-demo"
database_name = "test_db"
master_username = "db_user"
master_password = random_password.password.result
port = 5432
engine = "aurora-postgresql"
engine_version = "12.7"
apply_immediately = true
skip_final_snapshot = "true"
}
// child instances inherit the same config
resource "aws_rds_cluster_instance" "cluster_instance" {
identifier = "aurora-cluster-demo-instance"
cluster_identifier = aws_rds_cluster.cluster-demo.id
engine = aws_rds_cluster.cluster-demo.engine
engine_version = aws_rds_cluster.cluster-demo.engine_version
instance_class = "db.r4.large"
publicly_accessible = true # Remove
}
output "db_user" {
value = aws_rds_cluster.cluster-demo.master_username
}
You're doing a data lookup named data.aws_secretsmanager_secret_version.credentials, but you don't show the Terraform code for that. Terraform is going to do that lookup before it updates the aws_secretsmanager_secret_version. So the username and password it configures the DB with are going to be pulled from the previous version of the secret, not the new version you are creating when you run apply.
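Presumably that lookup looks something like the following (a hypothetical reconstruction, since the question doesn't include it):

data "aws_secretsmanager_secret_version" "credentials" {
  # Hypothetical sketch of the lookup referenced above; not shown in the question
  secret_id = aws_secretsmanager_secret.secret.id
}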
You should never have both a data and a resource in your Terraform that refer to the same thing. Always use the resource if you have it, and only use data for things that aren't being managed by Terraform.
Since you have the resource itself available in your Terraform code (and also the random_password resource), you shouldn't be using a data lookup at all. If you pull the value from one of the resources, then Terraform will handle the order of creation/updates correctly.
For example:
locals {
  db_credentials = jsondecode(aws_secretsmanager_secret_version.version.secret_string)
}

resource "aws_rds_cluster" "cluster-demo" {
  master_username = local.db_credentials["username"]
  master_password = local.db_credentials["password"]
  # ...
}
Or just simplify it and get rid of the jsondecode step:
resource "aws_rds_cluster" "cluster-demo" {
master_username = "db_user"
master_password = random_password.password.result
I also suggest adding a few Terraform outputs to help you diagnose this type of issue. The following will let you see exactly what username and password Terraform applied to the database:
output "db_user" {
value = aws_rds_cluster.cluster-demo.master_username
}
output "db_password" {
value = aws_rds_cluster.cluster-demo.master_password
sensitive = true
}

Fiware Cygnus: no data has been persisted in MongoDB

I am trying to use Cygnus with MongoDB, but no data has been persisted in the database.
Here is the notification received in Cygnus:
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Starting transaction (1437482681-118-0000000000)
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Received data ({ "subscriptionId" : "55a73819d0c457bb20b1d467", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "enocean", "isPattern" : "false", "id" : "enocean:myButtonA", "attributes" : [ { "name" : "ButtonValue", "type" : "", "value" : "ON", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2015-07-20T21:29:56.509293Z" } ] } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Event put in the channel (id=1454120446, ttl=10)
Here is my agent configuration:
cygnusagent.sources = http-source
cygnusagent.sinks = OrionMongoSink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /home/egm_demo/usr/fiware-cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
#cygnusagent.sinks.mongo-sink.db_prefix = kura
# prefix for the MongoDB collections
#cygnusagent.sinks.mongo-sink.collection_prefix = button
# true if collection names are based on a hash, false for human-readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
# ============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
Here is my rule:
{
  "grouping_rules": [
    {
      "id": 1,
      "fields": [
        "button"
      ],
      "regex": ".*",
      "destination": "kura",
      "fiware_service_path": "/kuraspath"
    }
  ]
}
Any ideas of what I have missed? Thanks in advance for your help!
This configuration parameter is wrong:
cygnusagent.sinks = OrionMongoSink
According to your configuration, it must be mongo-sink (I mean, you are configuring a Mongo sink named mongo-sink when you configure lines such as cygnusagent.sinks.mongo-sink.type).
In addition, I would recommend not using the grouping rules feature; it is an advanced feature for sending the data to a collection different from the default one, and at a first stage I would play with the default behaviour. Thus, my recommendation is to leave the path to the file in cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file, but comment out all the JSON within it :)