Table names are encoded incorrectly - postgresql

I'm using Grails 2.4.3 on Tomcat 7.0 with PostgreSQL 9.4. I have a domain object called Iteration. If I run Grails without Tomcat, the iteration table is created, but when I deploy the WAR inside Tomcat, an ıteration table is created instead of iteration.
I did not set anything in the Tomcat configuration files or the Tomcat service to enable UTF-8 encoding.
What may cause this problem to occur?
EDIT: Here are my production settings in DataSource.groovy:
production {
    dataSource {
        dbCreate = ""
        url = "jdbc:postgresql://localhost:5432/db"
        driverClassName = "org.postgresql.Driver"
        username = "postgres"
        password = "password"
        dialect = "net.kaleidos.hibernate.PostgresqlExtensionsDialect"
        logsql = false
        properties {
            jmxEnabled = true
            initialSize = 5
            maxActive = 50
            minIdle = 5
            maxIdle = 25
            maxWait = 10000
            maxAge = 10 * 60000
            timeBetweenEvictionRunsMillis = 1800000
            minEvictableIdleTimeMillis = 1800000
            validationQuery = "SELECT 1"
            validationQueryTimeout = 3
            validationInterval = 15000
            testOnBorrow = true
            testWhileIdle = true
            testOnReturn = false
            jdbcInterceptors = "ConnectionState"
        }
    }
}
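This looks like a locale issue rather than a PostgreSQL one: the dotless ı only appears when lower-casing happens under a Turkish (or Azeri) default locale, so Tomcat is presumably running with such a locale while run-app is not. A minimal Groovy sketch of the effect (only an illustration of Java's locale-sensitive lower-casing, not the actual Grails/Hibernate naming-strategy code):

// Illustration only: lower-casing the domain class name under a Turkish
// default locale turns the ASCII capital I into a dotless ı.
def name = "Iteration"
println name.toLowerCase(new Locale("tr", "TR"))  // prints: ıteration
println name.toLowerCase(Locale.ENGLISH)          // prints: iteration

If that is the cause, forcing the JVM locale for Tomcat (for example -Duser.language=en -Duser.country=US in CATALINA_OPTS / setenv.sh) is one way to confirm it.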

Related

AWS RDS for PostgreSQL Connection attempt timed out error

I created a PostgreSQL RDS instance in AWS with Terraform. From the AWS console everything seems normal, but when I try to connect to the database with DBeaver I can't. Likewise, I can't make an SSH connection to the EC2 instance I created, so maybe the two problems are related.
The Terraform code I wrote:
# postgres-db/main.tf
resource "aws_db_instance" "default" {
  allocated_storage      = 20
  storage_type           = "gp2"
  engine                 = var.engine
  engine_version         = var.engine-version
  instance_class         = var.instance-class
  db_name                = var.db-name
  identifier             = var.identifier
  username               = var.username
  password               = var.password
  port                   = var.port
  publicly_accessible    = var.publicly-accessible
  db_subnet_group_name   = var.db-subnet-group-name
  parameter_group_name   = var.parameter-group-name
  vpc_security_group_ids = var.vpc-security-group-ids
  apply_immediately      = var.apply-immediately
  skip_final_snapshot    = true
}

module "service-db" {
  source = "./postgres-db"

  apply-immediately      = true
  db-name                = var.service-db-name
  db-subnet-group-name   = data.terraform_remote_state.server.outputs.db_subnet_group
  identifier             = "${var.app-name}-db"
  password               = var.service-db-password
  publicly-accessible    = true # TODO: True for now, but should be false
  username               = var.service-db-username
  vpc-security-group-ids = [data.terraform_remote_state.server.outputs.security_group_allow_internal_postgres]
}

resource "aws_security_group" "allow_internal_postgres" {
  name        = "allow-internal-postgres"
  description = "Allow internal Postgres traffic"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.vpc.cidr_block, "0.0.0.0/0"] # TODO: Remove public IP
  }
}
In my research I found suggestions such as editing the security group rules or making the instance publicly accessible, which is roughly what I have done.
Screenshots: security group inbound rules; publicly accessible setting.
How can I solve this problem?
I solved my problem by pointing the RDS instance at a DB subnet group built from public subnets; even with publicly_accessible = true, the instance is only reachable from the internet when its subnets route to an Internet Gateway.
module "service-db" {
source = "./postgres-db"
apply-immediately = true
db-name = var.service-db-name
db-subnet-group-name = data.terraform_remote_state.server.outputs.db_subnet_group_public
identifier = "${var.app-name}-db"
password = var.service-db-password
publicly-accessible = true # TODO: True for now, but should be false
username = var.service-db-username
vpc-security-group-ids = [data.terraform_remote_state.server.outputs.security_group_allow_internal_postgres]
}
resource "aws_db_subnet_group" "private" {
name = "${var.server_name}-db-subnet-group-private"
subnet_ids = aws_subnet.private.*.id
tags = {
Name = "${var.server_name} DB Subnet Group Private"
}
}
resource "aws_db_subnet_group" "public" {
name = "${var.server_name}-db-subnet-group-public"
subnet_ids = aws_subnet.public.*.id
tags = {
Name = "${var.server_name} DB Subnet Group Public"
}
}
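As noted above, the public subnet group only helps if its subnets actually have a default route to an Internet Gateway. A minimal sketch of what that side of the server module might look like (the resource names here are illustrative, not taken from the original code):

# Illustrative only: public subnets need a default route to an Internet Gateway
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.vpc.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  count          = length(aws_subnet.public)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}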

How to use the useBulkCopyForBatchInsert on the JdbcSinkConnector?

I'm trying to bulk insert into an MSSQL table by adding "useBulkCopyForBatchInsert=true" to the connection.url option of the JdbcSinkConnector, as below.
"connection.url": "jdbc:sqlserver://...:1433;database=****;useBulkCopyForBatchInsert=true"
But data is not being inserted using bulk insert.
I am attaching the Connect log and the reference document.
Using bulk copy API for batch insert operation
https://learn.microsoft.com/en-us/sql/connect/jdbc/use-bulk-copy-api-batch-insert-operation?view=sql-server-ver16
Connect Log
[2022-07-18 16:46:32,224] INFO JdbcSinkConfig values:
auto.create = false
auto.evolve = false
batch.size = 3000
connection.attempts = 3
connection.backoff.ms = 10000
connection.password = [hidden]
connection.url = jdbc:sqlserver://...:1433;database=****;useBulkCopyForBatchInsert=true
connection.user = ****
db.timezone = Asia/Seoul
delete.enabled = false
dialect.name =
fields.whitelist = []
insert.mode = insert
max.retries = 10
pk.fields = []
pk.mode = none
quote.sql.identifiers = ALWAYS
retry.backoff.ms = 3000
table.name.format = ****
table.types = [TABLE]
(io.confluent.connect.jdbc.sink.JdbcSinkConfig:361)
I'm not sure if this is the issue, but you're using the wrong JDBC URL for SQL Server. You should use jdbc:sqlserver://...:1433;databaseName=; instead of jdbc:sqlserver://...:1433;database=;.
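Following that suggestion, the relevant part of the connector configuration would look roughly like this (host and database name are placeholders, not values from the original post):

"connection.url": "jdbc:sqlserver://myhost:1433;databaseName=mydb;useBulkCopyForBatchInsert=true",
"insert.mode": "insert",
"batch.size": "3000"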

Problems running Cygnus with Postgresql

I have installed Cygnus, and in the simple test configuration case which I took from here (https://github.com/telefonicaid/fiware-cygnus/blob/master/cygnus-ngsi/README.md) everything works fine.
But I need PostgreSQL as a backend for my application.
For this I adjusted the agent_1.conf file with all the PostgreSQL parameters found at http://fiware-cygnus.readthedocs.io/en/latest/cygnus-ngsi/installation_and_administration_guide/ngsi_agent_conf/
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = postgresql-sink
cygnus-ngsi.channels = postgresql-channel
cygnus-ngsi.sources.http-source.channels = hdfs-channel mysql-channel ckan-channel mongo-channel sth-channel kafka-channel dynamo-channel postgresql-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = default
cygnus-ngsi.sources.http-source.handler.default_service_path = /
cygnus-ngsi.sources.http-source.interceptors = ts gi
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
cygnus-ngsi.sinks.postgresql-sink.channel = postgresql-channel
cygnus-ngsi.sinks.postgresql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink
cygnus-ngsi.sinks.postgresql-sink.postgresql_host = 127.0.0.1
cygnus-ngsi.sinks.postgresql-sink.postgresql_port = 5432
cygnus-ngsi.sinks.postgresql-sink.postgresql_database = myUser
cygnus-ngsi.sinks.postgresql-sink.postgresql_username = mydb
cygnus-ngsi.sinks.postgresql-sink.postgresql_password = xxxx
cygnus-ngsi.sinks.postgresql-sink.attr_persistence = row
cygnus-ngsi.sinks.postgresql-sink.batch_size = 100
cygnus-ngsi.sinks.postgresql-sink.batch_timeout = 30
cygnus-ngsi.sinks.postgresql-sink.batch_ttl = 10
# postgresql-channel configuration
cygnus-ngsi.channels.postgresql-channel.type = memory
cygnus-ngsi.channels.postgresql-channel.capacity = 1000
cygnus-ngsi.channels.postgresql-channel.transactionCapacity = 100
I didn't really find any information about other files I am supposed to change, and I'm not sure whether all the parameters are correct.
I also tried the sample configuration from here: http://fiware-cygnus.readthedocs.io/en/latest/cygnus-ngsi/flume_extensions_catalogue/ngsi_postgresql_sink/index.html
Cygnus seems to start correctly, but when I try to send a notification I get connection refused.
Before doing anything, please create the database, the user and the password in PostgreSQL.
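For example, something along these lines in psql creates a role and database matching the example agent configuration below (the names and password are just the ones used in this example; change them to your own):

-- run as a PostgreSQL superuser
CREATE USER cygnus WITH PASSWORD 'cygnusdb';
CREATE DATABASE cygnusdb OWNER cygnus;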
The Cygnus configuration is like this one. The /etc/cygnus/conf/cygnus_instance_1.conf file:
CYGNUS_USER=cygnus
CONFIG_FOLDER=/usr/cygnus/conf
CONFIG_FILE=/usr/cygnus/conf/agent_1.conf
AGENT_NAME=cygnus-ngsi
LOGFILE_NAME=cygnus.log
ADMIN_PORT=8081
POLLING_INTERVAL=30
So, the other file /usr/cygnus/conf/agent_1.conf is like this one (please change PostgreSQL parameters):
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = postgresql-sink
cygnus-ngsi.channels = postgresql-channel
cygnus-ngsi.sources.http-source.channels = postgresql-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = default
cygnus-ngsi.sources.http-source.handler.default_service_path = /
cygnus-ngsi.sources.http-source.handler.events_ttl = 10
cygnus-ngsi.sources.http-source.interceptors = ts gi
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
#cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# =============================================
# postgresql-channel configuration
# channel type (must not be changed)
cygnus-ngsi.channels.postgresql-channel.type = memory
# capacity of the channel
cygnus-ngsi.channels.postgresql-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnus-ngsi.channels.postgresql-channel.transactionCapacity = 100
# ============================================
# NGSIPostgreSQLSink configuration
# channel name from where to read notification events
cygnus-ngsi.sinks.postgresql-sink.channel = postgresql-channel
# sink class, must not be changed
cygnus-ngsi.sinks.postgresql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink
# true applies the new encoding, false applies the old encoding.
# cygnus-ngsi.sinks.postgresql-sink.enable_encoding = false
# true if the grouping feature is enabled for this sink, false otherwise
cygnus-ngsi.sinks.postgresql-sink.enable_grouping = false
# true if name mappings are enabled for this sink, false otherwise
cygnus-ngsi.sinks.postgresql-sink.enable_name_mappings = false
# true if lower case is wanted to forced in all the element names, false otherwise
# cygnus-ngsi.sinks.postgresql-sink.enable_lowercase = false
# the FQDN/IP address where the PostgreSQL server runs
cygnus-ngsi.sinks.postgresql-sink.postgresql_host = 127.0.0.1
# the port where the PostgreSQL server listens for incoming connections
cygnus-ngsi.sinks.postgresql-sink.postgresql_port = 5432
# the name of the postgresql database
cygnus-ngsi.sinks.postgresql-sink.postgresql_database = cygnusdb
# a valid user in the PostgreSQL server
cygnus-ngsi.sinks.postgresql-sink.postgresql_username = cygnus
# password for the user above
cygnus-ngsi.sinks.postgresql-sink.postgresql_password = cygnusdb
# how the attributes are stored, either per row or per column (row, column)
cygnus-ngsi.sinks.postgresql-sink.attr_persistence = row
# select the data_model: dm-by-service-path or dm-by-entity
cygnus-ngsi.sinks.postgresql-sink.data_model = dm-by-entity
# number of notifications to be included within a processing batch
cygnus-ngsi.sinks.postgresql-sink.batch_size = 1
# timeout for batch accumulation
cygnus-ngsi.sinks.postgresql-sink.batch_timeout = 30
# number of retries upon persistence error
cygnus-ngsi.sinks.postgresql-sink.batch_ttl = 0
# true enables cache, false disables cache
cygnus-ngsi.sinks.postgresql-sink.backend.enable_cache = true

How to remove the "/" character from the service path

Good Morning!
Currently I have set up my structure in FIWARE, saving my historical records in MongoDB; for this I have been using mLab as hosting.
I attach my agent's configuration file. The problem is that, because of the mandatory "/" character of the service path, I cannot access the generated historical data, since that character is not allowed in MongoDB collection names.
agent_1.conf
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = mongo-sink
cygnus-ngsi.channels = mongo-channel
cygnus-ngsi.sources.http-source.channels = mongo-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = default
cygnus-ngsi.sources.http-source.handler.default_service_path = /sevilla
cygnus-ngsi.sources.http-source.handler.events_ttl = 2
cygnus-ngsi.sources.http-source.interceptors = ts
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMongoSink
cygnus-ngsi.sinks.mongo-sink.channel = mongo-channel
cygnus-ngsi.sinks.mongo-sink.enable_encoding = false
cygnus-ngsi.sinks.mongo-sink.enable_grouping = false
cygnus-ngsi.sinks.mongo-sink.enable_name_mappings = false
cygnus-ngsi.sinks.mongo-sink.enable_lowercase = false
cygnus-ngsi.sinks.mongo-sink.data_model = dm-by-service-path
cygnus-ngsi.sinks.mongo-sink.attr_persistence = row
cygnus-ngsi.sinks.mongo-sink.mongo_hosts = ds******.mlab.com:35866
cygnus-ngsi.sinks.mongo-sink.mongo_username = my_user
cygnus-ngsi.sinks.mongo-sink.mongo_password = ********
cygnus-ngsi.sinks.mongo-sink.db_prefix = sth_
cygnus-ngsi.sinks.mongo-sink.collection_prefix = sth_
cygnus-ngsi.sinks.mongo-sink.batch_size = 1
cygnus-ngsi.sinks.mongo-sink.batch_timeout = 30
cygnus-ngsi.sinks.mongo-sink.batch_ttl = 10
cygnus-ngsi.sinks.mongo-sink.data_expiration = 0
cygnus-ngsi.sinks.mongo-sink.collections_size = 0
cygnus-ngsi.sinks.mongo-sink.max_documents = 0
cygnus-ngsi.sinks.mongo-sink.ignore_white_spaces = true
cygnus-ngsi.channels.mongo-channel.type = com.telefonica.iot.cygnus.channels.CygnusMemoryChannel
cygnus-ngsi.channels.mongo-channel.capacity = 1000
cygnus-ngsi.channels.mongo-channel.transactionCapacity = 100
Is there any way for Cygnus to remove the "/" character from the service path?
Error: http://www.subirimagenes.com/imagedata.php?url=http://s2.subirimagenes.com/imagen/9827048captura-de-pantalla.png
SOLUTION: You just have to set the encoding to true in the agent configuration:
cygnus-ngsi.sinks.mongo-sink.enable_encoding = true
Thank you very much!

Quartz : There is no DataSource named 'myDS'

When I use the datasource name "quartzDS" everything works fine, but when I change the datasource name to any other name, like "myDS", I get an error.
Caused by: java.sql.SQLException: There is no DataSource named 'myDS'
My quartz.properties file:
org.quartz.scheduler.instanceName = QuartzClusterScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 100
org.quartz.threadPool.threadPriority = 8
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 5000
org.quartz.dataSource.quartzDS.jndiURL = java:jboss/myDS
Resolved. Changed from
org.quartz.dataSource.quartzDS.jndiURL = java:jboss/myDS
to
org.quartz.dataSource.myDS.jndiURL = java:jboss/myDS
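The reason is that Quartz resolves the data source by the name given in org.quartz.jobStore.dataSource, so the segment after org.quartz.dataSource. must use exactly that name. A minimal sketch of the matching pair (JNDI URL as in the question):

org.quartz.jobStore.dataSource = myDS
# the name segment below must match the jobStore.dataSource value above
org.quartz.dataSource.myDS.jndiURL = java:jboss/myDS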