Vert.x: unnamed prepared statement does not exist (PgBouncer) - postgresql

When I attempt to execute any request (through PgBouncer), I get this exception:
Caused by: io.vertx.pgclient.PgException: { "message": "unnamed prepared statement does not exist", "severity": "ERROR", "code": "26000", "file": "postgres.c", "line": "1620", "routine": "exec_bind_message" }
But if I execute directly against Postgres (not through PgBouncer), everything is OK.
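A likely culprit worth checking (this is general PgBouncer/Vert.x knowledge, not taken from the thread): PgBouncer in transaction-pooling mode does not track prepared statements across server connections, so the Vert.x client's prepared-statement cache can end up sending a Bind for a statement that was parsed on a different backend connection. A minimal sketch of one mitigation, disabling the cache on the Vert.x side (host, port, database, and credentials are placeholders):

import io.vertx.pgclient.PgConnectOptions;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.PoolOptions;

PgConnectOptions connectOptions = new PgConnectOptions()
  .setHost("pgbouncer-host")  // placeholder: PgBouncer's address
  .setPort(6432)              // placeholder: PgBouncer's listen port
  .setDatabase("mydb")
  .setUser("user")
  .setPassword("secret");
// Avoid cached server-side prepared statements that PgBouncer's
// transaction pooling cannot route consistently.
connectOptions.setCachePreparedStatements(false);

PgPool pool = PgPool.pool(connectOptions, new PoolOptions().setMaxSize(5));

Alternatively, setting pool_mode = session in pgbouncer.ini pins each client to one server connection for the whole session, which also avoids the error.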

Related

Concourse Worker on another server loses connection to Concourse Web

We have a Concourse Web Container and a Concourse Worker Container running on Server A (212.77.7.255 - the real IP is concealed). We use the latest Concourse version, 7.8.1.
As we ran out of Worker resources, we added another Concourse Worker Container on Server B. The Worker on Server B had been running fine for about five days, but all of a sudden it is no longer able to connect to Concourse Web on Server A.
The logs of the Worker on Server B say:
{
  "timestamp": "2022-07-12T11:15:59.542985762Z",
  "level": "error",
  "source": "worker",
  "message": "worker.container-sweeper.tick.failed-to-connect-to-tsa",
  "data": {
    "error": "dial tcp 212.77.7.255:2222: i/o timeout",
    "session": "6.4"
  }
}
{
  "timestamp": "2022-07-12T11:15:59.543044656Z",
  "level": "error",
  "source": "worker",
  "message": "worker.container-sweeper.tick.dial.failed-to-connect-to-any-tsa",
  "data": {
    "error": "all worker SSH gateways unreachable",
    "session": "6.4.2"
  }
}
{
  "timestamp": "2022-07-12T11:15:59.543060804Z",
  "level": "error",
  "source": "worker",
  "message": "worker.container-sweeper.tick.failed-to-dial",
  "data": {
    "error": "all worker SSH gateways unreachable",
    "session": "6.4"
  }
}
{
  "timestamp": "2022-07-12T11:15:59.543068953Z",
  "level": "error",
  "source": "worker",
  "message": "worker.container-sweeper.tick.failed-to-get-containers-to-destroy",
  "data": {
    "error": "all worker SSH gateways unreachable",
    "session": "6.4"
  }
}
{
  "timestamp": "2022-07-12T11:15:59.554118751Z",
  "level": "error",
  "source": "worker",
  "message": "worker.volume-sweeper.tick.failed-to-connect-to-tsa",
  "data": {
    "error": "dial tcp 212.77.7.255:2222: i/o timeout",
    "session": "7.4"
  }
}
{
  "timestamp": "2022-07-12T11:15:59.554164844Z",
  "level": "error",
  "source": "worker",
  "message": "worker.volume-sweeper.tick.dial.failed-to-connect-to-any-tsa",
  "data": {
    "error": "all worker SSH gateways unreachable",
    "session": "7.4.3"
  }
}
{
  "timestamp": "2022-07-12T11:15:59.554172593Z",
  "level": "error",
  "source": "worker",
  "message": "worker.volume-sweeper.tick.failed-to-dial",
  "data": {
    "error": "all worker SSH gateways unreachable",
    "session": "7.4"
  }
}
{
  "timestamp": "2022-07-12T11:15:59.554179789Z",
  "level": "error",
  "source": "worker",
  "message": "worker.volume-sweeper.tick.failed-to-get-volumes-to-destroy",
  "data": {
    "error": "all worker SSH gateways unreachable",
    "session": "7.4"
  }
}
{
  "timestamp": "2022-07-12T11:16:04.580220012Z",
  "level": "error",
  "source": "worker",
  "message": "worker.beacon-runner.beacon.failed-to-connect-to-tsa",
  "data": {
    "error": "dial tcp 212.77.7.255:2222: i/o timeout",
    "session": "4.1"
  }
}
{
  "timestamp": "2022-07-12T11:16:04.580284659Z",
  "level": "error",
  "source": "worker",
  "message": "worker.beacon-runner.beacon.dial.failed-to-connect-to-any-tsa",
  "data": {
    "error": "all worker SSH gateways unreachable",
    "session": "4.1.10"
  }
}
{
  "timestamp": "2022-07-12T11:16:04.580335377Z",
  "level": "error",
  "source": "worker",
  "message": "worker.beacon-runner.beacon.failed-to-dial",
  "data": {
    "error": "all worker SSH gateways unreachable",
    "session": "4.1"
  }
}
{
  "timestamp": "2022-07-12T11:16:04.580359868Z",
  "level": "error",
  "source": "worker",
  "message": "worker.beacon-runner.beacon.exited-with-error",
  "data": {
    "error": "all worker SSH gateways unreachable",
    "session": "4.1"
  }
}
{
  "timestamp": "2022-07-12T11:16:04.580372552Z",
  "level": "debug",
  "source": "worker",
  "message": "worker.beacon-runner.beacon.done",
  "data": {
    "session": "4.1"
  }
}
{
  "timestamp": "2022-07-12T11:16:04.580394879Z",
  "level": "error",
  "source": "worker",
  "message": "worker.beacon-runner.failed",
  "data": {
    "error": "all worker SSH gateways unreachable",
    "session": "4"
  }
}
The logs on Concourse Web on Server A show no entries of the Worker on Server B trying to connect. On Server B I'm able to connect to Concourse Web on Server A:
$ nc 212.77.7.255 2222
SSH-2.0-Go
We had this problem before, and we solved it by upgrading Concourse to the latest version, 7.8.1. Now I'm running out of ideas for where to debug this. What I've tried:
restarting the workers
restarting the web container
pruning the stalled worker of Server B
docker system prune on Server B
None of this helps. What can I do to debug this further and make the Worker on Server B connect again?
You said it happened with an earlier version, that you "ran out of Worker resources", and I'm seeing i/o timeouts in the logs... the one component you didn't mention is the DB.
It might be that the maximum number of connections on the DB has been reached, especially if the DB is used for purposes other than just Concourse (for example, compare the row count of pg_stat_activity against SHOW max_connections). That's where I'd look next.
We couldn't find out why the Docker network did not allow connecting to Server A. As connections from the host machine were going through, we told Docker to use the host network:
services:
  concourse-worker:
    ...
    network_mode: host
    ...
This solved the issue. It's not a pretty workaround, as the Docker container should have its own separate network, but since nothing else is running on this server it's fine.
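For reference, a fuller sketch of what the worker service could look like with host networking (the image tag, privileged flag, and omitted key/TSA settings are assumptions; the original compose file is not shown in full):

services:
  concourse-worker:
    image: concourse/concourse:7.8.1
    command: worker
    privileged: true      # Concourse workers generally need privileged mode
    network_mode: host    # use the host's network stack instead of the bridge
    # worker keys and CONCOURSE_TSA_* settings omitted

Note that with network_mode: host any ports: mappings on the service are ignored, since the container shares the host's interfaces directly.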

Azure Data Factory CI/CD release errors

I am trying to deploy to the UAT environment and followed the steps shown in a blog post and a YouTube video.
However, I keep getting failures.
If I run the release in 'validation only' mode, it passes fine. But when I actually deploy it in 'incremental' mode, I receive the following errors:
2022-03-01T17:24:27.6268601Z ##[error]At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.
2022-03-01T17:24:27.6301466Z ##[error]Details:
2022-03-01T17:24:27.6306699Z ##[error]BadRequest: Failed to encrypt sub-resource payload {
  "Id": "/subscriptions/a834838a-11d5-4657-a9c3-bc8b2ebdaa59/resourceGroups/adf-rg-uat-uks/providers/Microsoft.DataFactory/factories/adf-df-uat-uks/linkedservices/adfSQLDB_Dev",
  "Name": "adfSQLDB_Dev",
  "Properties": {
    "annotations": [],
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": "********************"
    }
  }
} and error is: Message for the errorCode not found..
2022-03-01T17:24:27.6314634Z ##[error]BadRequest: Failed to encrypt sub-resource payload {
  "Id": "/subscriptions/a834838a-11d5-4657-a9c3-bc8b2ebdaa59/resourceGroups/adf-rg-uat-uks/providers/Microsoft.DataFactory/factories/adf-df-uat-uks/linkedservices/adfSQLDB_Prod",
  "Name": "adfSQLDB_Prod",
  "Properties": {
    "annotations": [],
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": "********************"
    }
  }
} and error is: Message for the errorCode not found..
2022-03-01T17:24:27.6322501Z ##[error]BadRequest: Failed to encrypt sub-resource payload {
  "Id": "/subscriptions/a834838a-11d5-4657-a9c3-bc8b2ebdaa59/resourceGroups/adf-rg-uat-uks/providers/Microsoft.DataFactory/factories/adf-df-uat-uks/linkedservices/adfBlobStorage",
  "Name": "adfBlobStorage",
  "Properties": {
    "annotations": [],
    "type": "AzureBlobStorage",
    "typeProperties": {
      "connectionString": "********************"
    }
  }
} and error is: Expecting connection string of format "key1=value1; key2=value2"..
2022-03-01T17:24:27.6328134Z ##[error]BadRequest: Failed to encrypt sub-resource payload {
  "Id": "/subscriptions/a834838a-11d5-4657-a9c3-bc8b2ebdaa59/resourceGroups/adf-rg-uat-uks/providers/Microsoft.DataFactory/factories/adf-df-uat-uks/linkedservices/adfSQLDB",
  "Name": "adfSQLDB",
  "Properties": {
    "annotations": [],
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": "********************"
    }
  }
} and error is: Message for the errorCode not found..
2022-03-01T17:24:27.6336169Z ##[error]BadRequest: Failed to encrypt sub-resource payload {
  "Id": "/subscriptions/a834838a-11d5-4657-a9c3-bc8b2ebdaa59/resourceGroups/adf-rg-uat-uks/providers/Microsoft.DataFactory/factories/adf-df-uat-uks/linkedservices/adfSQLDB_UAT",
  "Name": "adfSQLDB_UAT",
  "Properties": {
    "annotations": [],
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": "********************"
    }
  }
} and error is: Message for the errorCode not found..
2022-03-01T17:24:27.6339122Z ##[error]Check out the troubleshooting guide to see if your issue is addressed: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment?view=azure-devops#troubleshooting
2022-03-01T17:24:27.6342204Z ##[error]Task failed while creating or updating the template deployment.
Regards
Mark
Are you using any Self-Hosted Integration Runtime? If yes, please check that it is in an available state during the deployment. Otherwise, your connection string may be missing the SecureString property:
"typeProperties": {
  "connectionString": {
    "type": "SecureString",
    "value": .............
  }
}
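If the SecureString form still fails, another hedged option is to keep the connection string in Azure Key Vault and reference it from the linked service; a sketch (the linked-service reference name and secret name below are made-up examples):

"typeProperties": {
  "connectionString": {
    "type": "AzureKeyVaultSecret",
    "store": {
      "referenceName": "AzureKeyVaultLinkedService",
      "type": "LinkedServiceReference"
    },
    "secretName": "uat-sql-connection-string"
  }
}

That way the target factory only has to encrypt a reference, and each environment can point at its own vault.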

Avro: org.apache.avro.AvroTypeException: Expected long. Got START_OBJECT

I am working on an Avro schema and trying to create test data to test it with Kafka, but when I produce a message I get this error: "Caused by: org.apache.avro.AvroTypeException: Expected long. Got START_OBJECT"
The schema I created is like this:
{
  "name": "MyClass",
  "type": "record",
  "namespace": "com.acme.avro",
  "doc": "This schema is for streaming information",
  "fields": [
    {"name": "batchId", "type": "long"},
    {"name": "status", "type": {"type": "enum", "name": "PlannedTripRequestedStatus", "namespace": "com.acme.avro.Dtos", "symbols": ["COMPLETED", "FAILED"]}},
    {"name": "runRefId", "type": "int"},
    {"name": "tripId", "type": ["null", "int"]},
    {"name": "referenceNumber", "type": ["null", "string"]},
    {"name": "errorMessage", "type": ["null", "string"]}
  ]
}
The test data is like this:
{
  "batchId": {
    "long": 3
  },
  "status": "COMPLETED",
  "runRefId": {
    "int": 1000
  },
  "tripId": {
    "int": 200
  },
  "referenceNumber": {
    "string": "ReferenceNumber1111"
  },
  "errorMessage": {
    "string": "Hello World"
  }
}
However, when I registered this schema and tried to produce a message with the Confluent console tool, I got the error org.apache.avro.AvroTypeException: Expected long. Got START_OBJECT. The whole error message is:
org.apache.kafka.common.errors.SerializationException: Error deserializing {"batchId": ...} to Avro of schema {"type":...}" at io.confluent.kafka.formatter.AvroMessageReader.readFrom(AvroMessageReader.java:134)
at io.confluent.kafka.formatter.SchemaMessageReader.readMessage(SchemaMessageReader.java:325)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:51)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: org.apache.avro.AvroTypeException: Expected long. Got START_OBJECT
at org.apache.avro.io.JsonDecoder.error(JsonDecoder.java:511)
at org.apache.avro.io.JsonDecoder.readLong(JsonDecoder.java:177)
at org.apache.avro.io.ResolvingDecoder.readLong(ResolvingDecoder.java:169)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:197)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:160)
at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:259)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:247)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:160)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
at io.confluent.kafka.schemaregistry.avro.AvroSchemaUtils.toObject(AvroSchemaUtils.java:213)
at io.confluent.kafka.formatter.AvroMessageReader.readFrom(AvroMessageReader.java:124)
Does anyone know what I did wrong with my schema or test data? Thank you so much!
You only need the type wrapper object if the type is ambiguous (a union of string and number, for example) or nullable.
For batchId and runRefId, just use plain values, as in the corrected payload below.
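For reference, here is the record in valid Avro JSON encoding against the schema above: the non-union fields (batchId, status, runRefId) take plain values, while the nullable union fields keep their branch wrapper:

{
  "batchId": 3,
  "status": "COMPLETED",
  "runRefId": 1000,
  "tripId": {"int": 200},
  "referenceNumber": {"string": "ReferenceNumber1111"},
  "errorMessage": {"string": "Hello World"}
}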

PostgreSQL internal server 500

I am getting a 500 internal server error with this message:
{
  "message": "no pg_hba.conf entry for host \"98.222.142.89\", user \"ykqkrdvlmsxktt\", database \"dp1g7eegdjct9ssl=true&sslfactory=org.postgresql.ssl.NonValidati\", SSL off",
  "error": {
    "length": 216,
    "name": "error",
    "severity": "FATAL",
    "code": "28000",
    "file": "auth.c",
    "line": "520",
    "routine": "ClientAuthentication"
  }
}
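The error itself offers a clue: the reported database name contains ssl=true&sslfactory=..., so the connection-string parameters were apparently appended to the database name without the ? separator, and the server consequently sees the connection with SSL off. A hedged sketch of a correctly formed pgjdbc URL (host, database name, and parameter placement are placeholders, not values recovered from the post):

// Hypothetical example: in a pgjdbc URL, parameters must follow a '?'
// after the database name; otherwise they are parsed as part of the name
// (note the "SSL off" in the error despite ssl=true in the mangled name).
String url = "jdbc:postgresql://myhost:5432/mydatabase"
    + "?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory";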

Getting an exception when I try to copy data from Azure Blob to SQL Data Warehouse using Data Factory

I have created a pipeline that uses the Copy activity to move data from Blob to SQL Data Warehouse.
Azure Blob Dataset:
"name": "TradeData",
"properties": {
  "type": "AzureBlob",
  "linkedServiceName": "HDInsightStorageLinkedService",
  "structure": [],
  "typeProperties": {
    "folderPath": "hdinsight/hive/warehouse/tradesummary/",
    "format": {
      "type": "OrcFormat"
    }
  },
SQL DW Dataset:
"name": "TradeDataRepository",
"properties": {
  "type": "AzureSqlDWTable",
  "linkedServiceName": "AzureSQLDataWarehouseLinkedService",
  "typeProperties": {
    "tableName": "tradesummary"
  },
Pipeline:
"activities": [
  {
    "name": "CopyActivityTemplate",
    "type": "Copy",
    "inputs": [
      {
        "name": "TradeData"
      }
    ],
    "outputs": [
      {
        "name": "TradeDataRepository"
      }
    ],
    "typeProperties": {
      "source": {
        "type": "BlobSource",
        "skipHeaderLineCount": 0
      },
      "sink": {
        "type": "SqlDWSink",
        "allowPolyBase": false
      }
When I execute the pipeline, I get the following error:
Database operation failed.
Error message from database execution : ErrorCode=FailedDbOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error happened when loading data into SQL Data Warehouse.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=110802;An internal DMS error occurred that caused this operation to fail. Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Common.ExternalAccess.HdfsAccessException, Message: Java exception raised on call to HdfsBridge_CreateRecordReader: Error [HdfsBridge::CreateRecordReader - Unexpected error encountered creating the record reader.] occurred while accessing external file [/hive/warehouse/tradesummary/000000_0][0].,Source=.Net SqlClient Data Provider,SqlErrorNumber=110802,Class=16,ErrorCode=-2146232060,State=1,Errors=[{Class=16,Number=110802,State=1,Message=110802;An internal DMS error occurred that caused this operation to fail. Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Common.ExternalAccess.HdfsAccessException, Message: Java exception raised on call to HdfsBridge_CreateRecordReader: Error [HdfsBridge::CreateRecordReader - Unexpected error encountered creating the record reader.] occurred while accessing external file [/hive/warehouse/tradesummary/000000_0][0].,},],'.
Any pointers would be appreciated.
The error message indicates that the problem is with your sink (SQL Data Warehouse), not with ADF. This can be a transient issue when there is inadequate system memory in the resource pool.
To double-check, try enabling and disabling the "Skip incompatible rows" option in the ADF copy activity and doing a few test runs to see whether you hit the problem consistently.
As stated by wBob, per the MSDN article there are a few other possible causes, such as the source data containing a bit or uniqueidentifier column. The bit problem appears to have been resolved, but the uniqueidentifier one has not. Check whether either of these datatypes exists in your source data; as a workaround, GUIDs can be converted to varchar upstream.