Typical ormconfig.json file for Google Cloud SQL? - google-cloud-sql

I have been trying for hours. What should the ormconfig.json file look like for Google Cloud SQL with TypeORM? I managed to get it working locally against the instance's IP (with MySQL Workbench, the Cloud SQL Proxy, and whitelisting my IP), but I don't know what the connection details should be for App Engine.
{
"name": "default",
"type": "mysql",
"host": "/cloudsql/[project:region:instance]",
"port": "3306",
"username": "root",
"password": "xxxx",
"database": "yyy",
"synchronize": true,
"logging": false,
"entities": [
"modules/**/*.entity.js"
]
}
or
{
"name": "default",
"type": "mysql",
"extra": {
"socketPath": "/cloudsql/[project:region:instance]"
},
"username": "root",
"password": "xxxx",
"database": "yyy",
"synchronize": true,
"logging": false,
"entities": [
"modules/**/*.entity.js"
]
}
or anything else?
Thanks a lot!

For those interested, here is the solution:
{
"name": "default",
"type": "mysql",
"extra": {
"socketPath": "/cloudsql/[project:region:instance]"
},
"username": "root",
"password": "xxxx",
"database": "yyy",
"synchronize": true,
"logging": false,
"entities": [
"dist/**/*.entity.js"
]
}
Note that I also changed the entities path so it points at the compiled files under dist/.

It didn't work for me until I also added the "/cloudsql/..." path to the "host" field:
{
"name": "default",
"host": "/cloudsql/[project:region:instance]",
"type": "mysql",
"extra": {
"socketPath": "/cloudsql/[project:region:instance]"
},
"username": "root",
"password": "xxxx",
"database": "yyy",
"synchronize": true,
"logging": false,
"entities": [
"dist/**/*.entity.js"
]
}
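If you also want to keep a local connection (for example through the Cloud SQL Proxy running on your machine) alongside the App Engine one, ormconfig.json can hold an array of named connections. A minimal sketch, assuming a "local" connection name and the proxy listening on 127.0.0.1:3306 (neither is part of the original answers):
[
  {
    "name": "default",
    "type": "mysql",
    "extra": {
      "socketPath": "/cloudsql/[project:region:instance]"
    },
    "username": "root",
    "password": "xxxx",
    "database": "yyy",
    "entities": ["dist/**/*.entity.js"]
  },
  {
    "name": "local",
    "type": "mysql",
    "host": "127.0.0.1",
    "port": 3306,
    "username": "root",
    "password": "xxxx",
    "database": "yyy",
    "entities": ["dist/**/*.entity.js"]
  }
]
createConnection() then picks up "default" (as on App Engine), while createConnection("local") uses the TCP connection when the proxy is running locally.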

Related

Debezium MS SQL Server produces wrong JSON format not recognized by Flink

I have the following setting (verified using curl connector/connector-name):
"config": {
"connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
"database.user": "admin",
"database.dbname": "test",
"database.hostname": "mssql-host",
"database.password": "xxxxxxx",
"database.history.kafka.bootstrap.servers": "server:9092", "database.history.kafka.topic": "dbhistory.test", "value.converter.schemas.enable": "false",
"name": "mssql-cdc",
"database.server.name": "test",
"database.port": "1433",
"include.schema.changes": "false"
}
I was able to pull CDC events into a Kafka topic. They are in the following format:
{
"schema": {
"type": "struct",
"fields": [
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "id"
},
{
"type": "string",
"optional": true,
"field": "value"
}
],
"optional": true,
"name": "test.dbo.tryme2.Value",
"field": "before"
},
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "id"
},
{
"type": "string",
"optional": true,
"field": "value"
}
],
"optional": true,
"name": "test.dbo.tryme2.Value",
"field": "after"
},
{
"type": "struct",
"fields": [
{
"type": "string",
"optional": false,
"field": "version"
},
{
"type": "string",
"optional": false,
"field": "connector"
},
{
"type": "string",
"optional": false,
"field": "name"
},
{
"type": "int64",
"optional": false,
"field": "ts_ms"
},
{
"type": "string",
"optional": true,
"name": "io.debezium.data.Enum",
"version": 1,
"parameters": {
"allowed": "true,last,false,incremental"
},
"default": "false",
"field": "snapshot"
},
{
"type": "string",
"optional": false,
"field": "db"
},
{
"type": "string",
"optional": true,
"field": "sequence"
},
{
"type": "string",
"optional": false,
"field": "schema"
},
{
"type": "string",
"optional": false,
"field": "table"
},
{
"type": "string",
"optional": true,
"field": "change_lsn"
},
{
"type": "string",
"optional": true,
"field": "commit_lsn"
},
{
"type": "int64",
"optional": true,
"field": "event_serial_no"
}
],
"optional": false,
"name": "io.debezium.connector.sqlserver.Source",
"field": "source"
},
{
"type": "string",
"optional": false,
"field": "op"
},
{
"type": "int64",
"optional": true,
"field": "ts_ms"
},
{
"type": "struct",
"fields": [
{
"type": "string",
"optional": false,
"field": "id"
},
{
"type": "int64",
"optional": false,
"field": "total_order"
},
{
"type": "int64",
"optional": false,
"field": "data_collection_order"
}
],
"optional": true,
"field": "transaction"
}
],
"optional": false,
"name": "test.dbo.tryme2.Envelope"
},
"payload": {
"before": null,
"after": {
"id": 777,
"value": "xxxxxx"
},
"source": {
"version": "1.8.1.Final",
"connector": "sqlserver",
"name": "test",
"ts_ms": 1647169350996,
"snapshot": "true",
"db": "test",
"sequence": null,
"schema": "dbo",
"table": "tryme2",
"change_lsn": null,
"commit_lsn": "00000043:00000774:000a",
"event_serial_no": null
},
"op": "r",
"ts_ms": 1647169350997,
"transaction": null
}
}
In Flink, when I created a source table using the topic, I get:
Caused by: java.io.IOException: Corrupt Debezium JSON message
I already have "value.converter.schemas.enable": "false", so why doesn't this work?
Just found out that the configuration is hierarchical, meaning you have to supply both value.converter and value.converter.schemas.enable to override the Kafka Connect worker configuration at the connector level.
I sincerely wish there were some sort of validation so I did not have to wonder about this for hours.
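For reference, a minimal sketch of the connector-level override (the converter class shown here is the stock Kafka Connect JsonConverter; adjust it to whatever converter your worker actually uses):
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "false"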
Also, if the schema is desired, there is a Flink configuration option hidden in the doc:
In order to interpret such messages, you need to add the option 'debezium-json.schema-include' = 'true' into above DDL WITH clause (false by default). Usually, this is not recommended to include schema because this makes the messages very verbose and reduces parsing performance.
I have to say this is really bad developer experience.

org.apache.kafka.connect.transforms.ReplaceField does not work

The documentation I used: https://docs.confluent.io/platform/current/connect/transforms/replacefield.html
I use this connector to rename the PersonId column to Id, using org.apache.kafka.connect.transforms.ReplaceField and setting renames to PersonId:Id:
{
"name": "SQL_Connector",
"config": {
"connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
"tasks.max": "1",
"database.hostname": "hostname",
"database.port": "1433",
"database.user": "user",
"database.password": "password",
"database.dbname": "sqlserver",
"database.server.name": "server",
"database.history.kafka.bootstrap.servers": "kafka:9092",
"database.history.kafka.topic": "dbhistory.test",
"transforms": "RenameField,addStaticField",
"transforms.RenameField.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
"transforms.RenameField.renames": "PersonId:Id",
"transforms.addStaticField.type":"org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.addStaticField.static.field":"table",
"transforms.addStaticField.static.value":"changedtablename",
}
}
But when I read the value from the topic, the PersonId field is not renamed:
{
"schema": {
"type": "struct",
"fields": [
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "PersonId"
}
],
"optional": true,
"name": "test.Value",
"field": "before"
},
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "PersonId"
}
],
"optional": true,
"name": "test.Value",
"field": "after"
},
{
"type": "struct",
"fields": [
{
"type": "string",
"optional": false,
"field": "version"
},
{
"type": "string",
"optional": false,
"field": "connector"
},
{
"type": "string",
"optional": false,
"field": "name"
},
{
"type": "int64",
"optional": false,
"field": "ts_ms"
},
{
"type": "string",
"optional": true,
"name": "io.debezium.data.Enum",
"version": 1,
"parameters": {
"allowed": "true,last,false"
},
"default": "false",
"field": "snapshot"
},
{
"type": "string",
"optional": false,
"field": "db"
},
{
"type": "string",
"optional": false,
"field": "schema"
},
{
"type": "string",
"optional": false,
"field": "table"
},
{
"type": "string",
"optional": true,
"field": "change_lsn"
},
{
"type": "string",
"optional": true,
"field": "commit_lsn"
},
{
"type": "int64",
"optional": true,
"field": "event_serial_no"
}
],
"optional": false,
"name": "io.debezium.connector.sqlserver.Source",
"field": "source"
},
{
"type": "string",
"optional": false,
"field": "op"
},
{
"type": "int64",
"optional": true,
"field": "ts_ms"
},
{
"type": "string",
"optional": true,
"field": "table"
}
],
"optional": false,
"name": "test.Envelope"
},
"payload": {
"before": null,
"after": {
"PersonId": 1,
},
"source": {
"version": "1.0.3.Final",
"connector": "sqlserver",
"name": "test",
"ts_ms": 1627628793596,
"snapshot": "true",
"db": "test",
"schema": "dbo",
"table": "TestTable",
"change_lsn": null,
"commit_lsn": "00023472:00000100:0001",
"event_serial_no": null
},
"op": "r",
"ts_ms": 1627628793596,
"table": "changedtablename"
}
}
How do I rename the field?
You can only replace fields that are at the top level of the Kafka record, as the example in the doc shows.
That being said, you will need to extract the after field first.
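For example, Debezium's ExtractNewRecordState SMT can flatten the envelope first, so the columns from after become top-level fields that ReplaceField can then rename. A rough, untested sketch of the transform section, keeping the alias names from the config above and only adding the unwrap step:
"transforms": "unwrap,RenameField,addStaticField",
"transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
"transforms.RenameField.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
"transforms.RenameField.renames": "PersonId:Id",
"transforms.addStaticField.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.addStaticField.static.field": "table",
"transforms.addStaticField.static.value": "changedtablename"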

Is there a way to transform key field values to lower case in the Debezium SQL Server source connector? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I want to transform SQL Server column names to lower case while storing them in a Kafka topic. I am using Debezium as my source connector.
It can be done using the Kafka Connect Common Transformations by Jeremy Custenborder.
SQL Server table:
Id    Name  Description  Weight  Pro_Id
101   aaa   Sample_Test  3.14    2020-02-21 13:32:06.5900000
102   eee   testdata1    3.14    2020-02-21 13:32:06.5900000
Step 1: Download the Kafka Connect Common Transformations jar by Jeremy Custenborder from Confluent Hub via this link
Step 2: Place the jar file in /usr/share/java or /kafka/libs, depending on your Kafka environment
Step 3: Create the Debezium SQL Server source connector
{
"name": "sqlserver_src_connector",
"config": {
"connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
"database.server.name": "sqlserver",
"database.hostname": "*.*.*.*",
"database.port": "1433",
"database.user": "username",
"database.password": "password",
"database.dbname": "db_name",
"table.whitelist": "dbo.tablename",
"transforms": "unwrap,changeCase",
"transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
"transforms.changeCase.type" : "com.github.jcustenborder.kafka.connect.transform.common.ChangeCase$Value",
"transforms.changeCase.from" : "UPPER_UNDERSCORE",
"transforms.changeCase.to" : "LOWER_UNDERSCORE",
"database.history.kafka.bootstrap.servers": "*.*.*.*",
"database.history.kafka.topic": "schema-changes-tablename"
}
}
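A side note, not part of the original answer: on newer Debezium versions the whitelist options were renamed, so depending on your version the table filter may need to be written as, for example:
"table.include.list": "dbo.tablename"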
Step 4: Kafka topic data
{
"schema": {
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "id"
},
{
"type": "string",
"optional": false,
"field": "name"
},
{
"type": "string",
"optional": true,
"field": "description"
},
{
"type": "double",
"optional": true,
"field": "weight"
},
{
"type": "int64",
"optional": false,
"name": "io.debezium.time.NanoTimestamp",
"version": 1,
"field": "pro_id"
}
],
"optional": true,
"name": "sqlserver.dbo.tablename"
},
"payload": {
"id": 101,
"name": "aaa",
"description": "Sample_Test",
"weight": 3.14,
"pro_id": 1582291926590000000
}
}
{
"schema": {
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "id"
},
{
"type": "string",
"optional": false,
"field": "name"
},
{
"type": "string",
"optional": true,
"field": "description"
},
{
"type": "double",
"optional": true,
"field": "weight"
},
{
"type": "int64",
"optional": false,
"name": "io.debezium.time.NanoTimestamp",
"version": 1,
"field": "pro_id"
}
],
"optional": true,
"name": "sqlserver.dbo.tablename"
},
"payload": {
"id": 102,
"name": "eee",
"description": "testdata1",
"weight": 3.14,
"pro_id": 1582291926590000000
}
}
Thanks for the help, Jiri Pechanec and Chris Cranford (#Naros) from the Debezium community.

Kafka mirrormaker jmxtrans configuration file format

I use jmxtrans to monitor Kafka MirrorMaker, but my JSON format seems to have problems and the data cannot be displayed.
How should this JSON file be written? Thanks.
{
"servers": [{
"port": "5448",
"host": "10.10.21.10",
"queries": [{
"obj": "kafka.consumer:type=ConsumerTopicMetrics,name=BytesInPerSec,topic=mirror_group-*",
"attr": ["Count", "MeanRate", "OneMinuteRate"],
"resultAlias": "mirror_group-*",
"outputWriters": [{
"#class": "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
"url": "http://10.10.21.10:8086",
"username": "root",
"password": "123",
"database": "jmxDB",
"tags": {
"application": "BytesInPerSec"
}
}]
},
{
"obj": "kafka.consumer:type=ConsumerTopicMetrics,name=MessagesInPerSec,topic=mirror_group-*",
"attr": ["Count", "MeanRate", "OneMinuteRate"],
"resultAlias": "mirror_group-*",
"outputWriters": [{
"#class": "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
"url": "http://10.10.21.10:8086",
"username": "root",
"password": "123",
"database": "jmxDB",
"tags": {
"application": "MessagesInPerSec"
}
}]
},
{
"obj": "kafka.producer:type=ProducerRequestMetrics,name=ProducerRequestRateAndTimeMs,topic=mirror_group-*",
"attr": ["Count", "Max", "Min", "50thPercentile", "75thPercentile", "95thPercentile"],
"resultAlias": "mirror_group-*",
"outputWriters": [{
"#class": "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
"url": "http://10.10.21.10:8086",
"username": "root",
"password": "123",
"database": "jmxDB",
"tags": {
"application": "ProducerRequestRateAndTimeMs"
}
}]
},
{
"obj": "kafka.producer:type=ProducerRequestMetrics,name=ProducerRequestSize,topic=mirror_group-*",
"attr": ["Count", "Max", "Min", "50thPercentile", "75thPercentile", "95thPercentile"],
"resultAlias": "mirror_group-*",
"outputWriters": [{
"#class": "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
"url": "http://10.10.21.10:8086",
"username": "root",
"password": "123",
"database": "jmxDB",
"tags": {
"application": "ProducerRequestSize"
}
}]
}
]
}]
}
I can't get the data to show up in Grafana.
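One observation, offered as an assumption rather than a verified fix: in the jmxtrans examples I have seen, the output writer class is given under the key "@class", not "#class". A minimal sketch of a single query block under that assumption (all other values copied from the config above):
{
  "obj": "kafka.consumer:type=ConsumerTopicMetrics,name=BytesInPerSec,topic=mirror_group-*",
  "attr": ["Count", "MeanRate", "OneMinuteRate"],
  "resultAlias": "mirror_group-*",
  "outputWriters": [{
    "@class": "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
    "url": "http://10.10.21.10:8086",
    "username": "root",
    "password": "123",
    "database": "jmxDB",
    "tags": { "application": "BytesInPerSec" }
  }]
}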

Visual Studio Code SFTP to multiple servers

In PhpStorm, there is a way to configure multiple SFTP endpoints and choose which server you want to upload to. I'm looking for this functionality in Visual Studio Code. I have installed the SFTP VS Code extension and I am able to configure it for one endpoint. What if I want to upload a file to multiple servers? How can I configure that? Or is there another extension that does that?
Hi, you can add multiple servers to the config. Just make sure the context is not the same for each entry.
[
{
"name": "server1",
"context": "/project/build",
"host": "host",
"username": "username",
"password": "password",
"remotePath": "/remote/project/build"
},
{
"name": "server2",
"context": "/project/src",
"host": "host",
"username": "username",
"password": "password",
"remotePath": "/remote/project/src"
}
]
You can use "profiles" with the SFTP extension now. https://github.com/liximomo/vscode-sftp#profiles
{
"name": "My Project",
"protocol": "sftp",
"remotePath": "/",
"port": 22,
"profiles": {
"dev": {
"host": "server1.example.com",
"username": "username",
"password": "password"
},
"prod": {
"host": "server2.example.com",
"username": "other-username",
"password": "other-password"
}
},
"defaultProfile": "dev"
}
You can set up remotePath for each of your profiles like this (I saw that such a question was asked by #Charlie Parker under the second answer to this question):
{
"name": "ExampleName",
"protocol": "sftp",
"port": 22,
"profiles": {
"profile1": {
"host": "connection1",
"username": "user1",
"remotePath":"/path1"
},
"profile2": {
"host": "connection2",
"username": "user2",
"remotePath":"/path2"
},
"profile3": {
"host": "connection3",
"username": "user3",
"remotePath":"/path3"
}
}
}
And with Ctrl+Shift+P > Set Profile you can change your profile.
There are a lot of extensions that will work for your requirements. Below is a list of a few:
https://marketplace.visualstudio.com/items?itemName=mkloubert.vs-deploy
https://marketplace.visualstudio.com/items?itemName=mkloubert.vscode-deploy-reloaded
https://marketplace.visualstudio.com/items?itemName=humy2833.ftp-simple
Try this:
[
{
"name": "project 1",
"context": "/project/project1",
"host": "",
"username": "",
"password": "",
"protocol": "ftp",
"post": 21,
"remotePath": "/",
"uploadOnSave": true
},
{
"name": "project 2",
"context": "/project/project2",
"host": "",
"username": "",
"password": "",
"protocol": "ftp",
"post": 21,
"remotePath": "/",
"uploadOnSave": true
}
]