How can I configure CORS in Play so that it accepts all headers, methods and origins when I connect to it from Angular 7?
I looked at the documentation, but it doesn't help me that much.
Is it only configured in application.conf?
play.filters.cors {
  pathPrefixes = ["/"]
  allowedOrigins = null
  allowedHttpMethods = null
  allowedHttpHeaders = null
  exposedHeaders = ["Access-Control-Allow-Origin"]
  preflightMaxAge = 3 days
}
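For reference, here is a minimal sketch of a slightly tighter variant of the same block. It assumes Play 2.6+ (where the filter is switched on through play.filters.enabled) and an Angular dev server at http://localhost:4200; both of those are assumptions, so adjust them to your setup.
# application.conf — sketch only; origin and Play version are assumptions
play.filters.enabled += "play.filters.cors.CORSFilter"
play.filters.cors {
  pathPrefixes = ["/"]
  # restrict to the Angular dev origin instead of allowing every origin (null)
  allowedOrigins = ["http://localhost:4200"]
  allowedHttpMethods = ["GET", "POST", "PUT", "DELETE", "OPTIONS"]
  # null accepts all request headers
  allowedHttpHeaders = null
  exposedHeaders = ["Access-Control-Allow-Origin"]
  preflightMaxAge = 3 days
}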
I'm trying to bulk insert into an MSSQL table by adding "useBulkCopyForBatchInsert=true" to the connection.url option of the JdbcSinkConnector, as below.
"connection.url": "jdbc:sqlserver://...:1433;database=****;useBulkCopyForBatchInsert=true"
But the data is not being inserted using bulk insert.
I've attached the Connect log and the reference document.
Using bulk copy API for batch insert operation
https://learn.microsoft.com/en-us/sql/connect/jdbc/use-bulk-copy-api-batch-insert-operation?view=sql-server-ver16
Connect Log
[2022-07-18 16:46:32,224] INFO JdbcSinkConfig values:
auto.create = false
auto.evolve = false
batch.size = 3000
connection.attempts = 3
connection.backoff.ms = 10000
connection.password = [hidden]
connection.url = jdbc:sqlserver://...:1433;database=****;useBulkCopyForBatchInsert=true
connection.user = ****
db.timezone = Asia/Seoul
delete.enabled = false
dialect.name =
fields.whitelist = []
insert.mode = insert
max.retries = 10
pk.fields = []
pk.mode = none
quote.sql.identifiers = ALWAYS
retry.backoff.ms = 3000
table.name.format = ****
table.types = [TABLE]
(io.confluent.connect.jdbc.sink.JdbcSinkConfig:361)
I'm not sure if this is the issue, but you're using the wrong JDBC URL for SQL Server. You should use jdbc:sqlserver://...:1433;databaseName=****; instead of jdbc:sqlserver://...:1433;database=****;.
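If that is the case, the relevant part of the sink config would end up looking something like this (host and database name masked as in the question; insert.mode and batch.size taken from the log above):
"connection.url": "jdbc:sqlserver://...:1433;databaseName=****;useBulkCopyForBatchInsert=true",
"insert.mode": "insert",
"batch.size": 3000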
I am trying to integrate MongoDB and Elasticsearch using Monstache, but I am facing an error. Please help me solve it.
I will respond with whatever output you need.
config.toml file
mongo-url = "mongodb+srv://prince:mypassword#cluster0.mp297.mongodb.net/?retryWrites=true&w=majority"
elasticsearch-urls = ["http://127.0.0.1:9200"]
elasticsearch-max-conns = 10
replay = false
resume = true
enable-oplog = true
resume-name = "default"
namespace-regex = '^Satellite\.posts$'
direct-read-namespaces = ["Satellite.posts"]
change-stream-namespaces = ["Satellite."]
index-as-update = true
verbose = true
exit-after-direct-reads = false
[[mapping]]
namespace = "Satellite.posts"
index = "satellite"
I'm trying to spin up an Aurora PostgreSQL cluster, and I can't seem to make it available over the internet. I'm using Terraform to define the infrastructure.
I've created a security group to allow external access, and it is attached to the VPC subnets used by the cluster. Still, I can't access the endpoints from my local machine.
I can't figure out what I'm missing.
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = ">=3.11.0"
name = "vpc-auroradb-${var.environment}"
cidr = var.vpc_cidr_block
azs = var.availability_zones
private_subnets = var.vpc_private_subnets
public_subnets = var.vpc_public_subnets
database_subnets = var.vpc_database_subnets
enable_nat_gateway = true
enable_dns_hostnames = true
enable_dns_support = true
create_igw = true
create_database_internet_gateway_route = true
create_database_nat_gateway_route = true
create_database_subnet_group = true
create_database_subnet_route_table = true
}
module "aurora_cluster" {
source = "terraform-aws-modules/rds-aurora/aws"
version = ">=6.1.3"
name = "bambi-${var.environment}"
engine = "aurora-postgresql"
engine_version = "12.8"
instance_class = "db.t4g.large"
publicly_accessible = true
instances = {
1 = {
identifier = "bambi-1"
}
2 = {
identifier = "bambi-2"
}
}
autoscaling_enabled = true
autoscaling_min_capacity = 2
autoscaling_max_capacity = 3
vpc_id = module.vpc.vpc_id
db_subnet_group_name = module.vpc.database_subnet_group_name
create_db_subnet_group = false
create_security_group = false
iam_database_authentication_enabled = true
storage_encrypted = true
apply_immediately = true
monitoring_interval = 30
db_parameter_group_name = aws_db_parameter_group.parameter_group.id
db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.parameter_group.id
vpc_security_group_ids = [aws_security_group.sg_public.id]
enabled_cloudwatch_logs_exports = ["postgresql"]
}
resource "aws_security_group" "sg_public" {
vpc_id = module.vpc.vpc_id
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Allowing traffic in from all sources
}
egress {
from_port = 0 # Allowing any incoming port
to_port = 0 # Allowing any outgoing port
protocol = "-1" # Allowing any outgoing protocol
cidr_blocks = ["0.0.0.0/0"] # Allowing traffic out to all IP addresses
}
}
From the documentation of the VPC module used, in order to have public access to the database, you need the following:
create_database_subnet_group = true
create_database_subnet_route_table = true
create_database_internet_gateway_route = true
enable_dns_hostnames = true
enable_dns_support = true
create_database_nat_gateway_route should not be true. If we take a look at the code for the module on GitHub:
resource "aws_route" "database_internet_gateway" {
count = var.create_vpc && var.create_igw && var.create_database_subnet_route_table && length(var.database_subnets) > 0 && var.create_database_internet_gateway_route && false == var.create_database_nat_gateway_route ? 1 : 0
route_table_id = aws_route_table.database[0].id
destination_cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.this[0].id
timeouts {
create = "5m"
}
}
We can see that the count for the internet gateway route will be 0, which means the route that would allow public internet access is not created for the database subnets.
On the other hand, setting create_database_internet_gateway_route to true blocks access through the NAT gateway as well, since the route table won't get that route either:
resource "aws_route" "database_nat_gateway" {
count = var.create_vpc && var.create_database_subnet_route_table && length(var.database_subnets) > 0 && false == var.create_database_internet_gateway_route && var.create_database_nat_gateway_route && var.enable_nat_gateway ? var.single_nat_gateway ? 1 : length(var.database_subnets) : 0
route_table_id = element(aws_route_table.database.*.id, count.index)
destination_cidr_block = "0.0.0.0/0"
nat_gateway_id = element(aws_nat_gateway.this.*.id, count.index)
timeouts {
create = "5m"
}
}
Essentially, by setting both variables to true you block both paths: neither the internet gateway route nor the NAT gateway route gets created for the database subnets.
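So, keeping only the flags relevant here, a VPC module block that routes the database subnets through the internet gateway (the public-access path) would look roughly like this; this is a sketch of the flags discussed above, not a complete configuration:
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  version = ">=3.11.0"
  # ... name, cidr, azs and subnets as in the question ...
  enable_dns_hostnames = true
  enable_dns_support = true
  create_igw = true
  create_database_subnet_group = true
  create_database_subnet_route_table = true
  # route database subnets via the internet gateway only
  create_database_internet_gateway_route = true
  create_database_nat_gateway_route = false
}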
I'm trying to configure a standalone producer to read CSV files from an SFTP server and send the data to a topic in the cloud.
So far I have succeeded in reading my CSV data from the file and parsing it according to my value.schema.
But now, instead of using a fixed configuration schema, I'd like to use the schema registry. So I configured an Avro schema for my test topic on Confluent Cloud, generated the API key/secret and updated my config files.
I can see that the connection is working fine (no authentication errors, and via the CLI I can access the test schema), but when I try to run the producer I get the following error:
[2021-09-20 16:39:53,442] INFO SftpCsvSourceConnectorConfig values:
batch.size = 1000
behavior.on.error = IGNORE
cleanup.policy = NONE
csv.case.sensitive.field.names = false
csv.escape.char = 92
csv.file.charset = UTF-8
csv.first.row.as.header = false
csv.ignore.leading.whitespace = true
csv.ignore.quotations = false
csv.keep.carriage.return = false
csv.null.field.indicator = NEITHER
csv.quote.char = 34
csv.rfc.4180.parser.enabled = false
csv.separator.char = 44
csv.skip.lines = 0
csv.strict.quotes = false
csv.verify.reader = true
empty.poll.wait.ms = 250
error.path = /home/alberto/opt/confluent-6.2.0/sftp2/error
file.minimum.age.ms = 0
finished.path = /home/alberto/opt/confluent-6.2.0/sftp2/finished
input.file.pattern = .*.csv
input.path = /home/alberto/opt/confluent-6.2.0/sftp2/data
kafka.topic = testSchema
kerberos.keytab.path =
kerberos.user.principal =
key.schema = {"name" : "com.example.users.UserKey","type" : "STRUCT","isOptional" : true,"fieldSchemas" : {"material" : {"type" : "STRING","isOptional" : true}}}
parser.timestamp.date.formats = [yyyy-MM-dd'T'HH:mm:ss, yyyy-MM-dd' 'HH:mm:ss]
parser.timestamp.timezone = UTC
processing.file.extension = .PROCESSING
proxy.password = [hidden]
proxy.username =
schema.generation.enabled = false
schema.generation.key.fields = []
schema.generation.key.name = defaultkeyschemaname
schema.generation.value.name = defaultvalueschemaname
sftp.host = 192.168.1.6
sftp.password = [hidden]
sftp.port = 22
sftp.proxy.url =
sftp.username = user
timestamp.field =
timestamp.mode = PROCESS_TIME
tls.passphrase = [hidden]
tls.pemfile =
tls.private.key = [hidden]
tls.public.key = [hidden]
value.schema =
...
Caused by: org.apache.kafka.common.config.ConfigException: Both configs key.schema and value.schema must be set if schema.generation.enabled is false, but key.schema was not null and value.schema was null.
at io.confluent.connect.sftp.source.SftpSourceConnectorConfig.validateSchema(SftpSourceConnectorConfig.java:181)
at io.confluent.connect.sftp.source.SftpSourceConnectorConfig.<init>(SftpSourceConnectorConfig.java:121)
at io.confluent.connect.sftp.source.SftpCsvSourceConnectorConfig.<init>(SftpCsvSourceConnectorConfig.java:157)
at io.confluent.connect.sftp.SftpCsvSourceConnector.start(SftpCsvSourceConnector.java:44)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:184)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:209)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:348)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:331)
... 7 more
If I set schema.generation.enabled to true, it seems that it creates an empty schema:
value.schema = {"type":"STRUCT","isOptional":false,"fieldSchemas":{}}
and then I get:
org.apache.kafka.common.config.ConfigException: Failed to access Avro data from topic testSchema : Schema being registered is incompatible with an earlier schema for subject "testSchema-value"; error code: 409; error code: 409
as if it's trying to register a schema, which is not what I want; I just need to fetch the schema from the registry and use it.
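For comparison, a hand-written value.schema in the same format as the key.schema above would look roughly like this (the field names here are just placeholders); this is what the error is asking for, and exactly what I'm trying to avoid maintaining by using the registry:
value.schema = {"name" : "com.example.users.UserValue","type" : "STRUCT","isOptional" : false,"fieldSchemas" : {"field1" : {"type" : "STRING","isOptional" : true},"field2" : {"type" : "STRING","isOptional" : true}}}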
If anyone needs any additional information about the configuration, I'll be happy to provide it.
I've just updated Liquidsoap to 1.3.0 and now get_process_lines does not return anything.
def get_request() =
  # Get the URI
  lines = get_process_lines("curl http://localhost:3000/api/v1/liquidsoap/next/my-radio")
  log("liquidsoap curl returns #{lines}")
  uri = list.hd(default="",lines)
  log("liquidsoap will try and play #{uri}")
  # Create a request
  request.create(uri)
end
I read in the CHANGELOG:
- Moved get_process_lines and get_process_output to utils.liq, added optional env parameter
Does it mean I have to do something to use utils.liq in my script now?
The full script is as follows
set("log.file",false)
set("log.stdout",true)
set("log.level",3)
def apply_metadata(m) =
  title = m["title"]
  artist = m["artist"]
  log("Now playing: #{title} by #{artist}")
end
# Our custom request function
def get_request() =
  # Get the URI
  lines = get_process_lines("curl http://localhost:3000/api/v1/liquidsoap/next/my-radio")
  log("liquidsoap curl returns #{lines}")
  uri = list.hd(default="",lines)
  log("liquidsoap will try and play #{uri}")
  # Create a request
  request.create(uri)
end
def my_safe(s) =
  security = sine()
  fallback(track_sensitive=false,[s,security])
end
s = request.dynamic(id="s",get_request)
s = on_metadata(apply_metadata,s)
s = crossfade(s)
s = my_safe(s)
# We output the stream to an icecast
# server, in ogg/vorbis format.
log("liquidsoap starting")
output.icecast(
  %mp3(id3v2=true,bitrate=128,samplerate=44100),
  host = "localhost",
  port = 8000,
  password = "PASSWORD",
  mount = "myradio",
  genre="various",
  url="http://www.myradio.fr",
  description="My Radio",
  s
)
Of course, the API is working:
$ curl http://localhost:3000/api/v1/liquidsoap/next/my-radio
annotate:title="Chamakay",artist="Blood Orange",album="Cupid Deluxe":http://localhost/stream/3.mp3
A simpler example:
lines = get_process_lines("echo hi")
log("lines = #{lines}")
line = list.hd(default="",lines)
log("line = #{line}")
returns the following logs:
2017/05/05 15:24:42 [lang:3] lines = []
2017/05/05 15:24:42 [lang:3] line =
Many thanks in advance for your help!
geoffroy
The issue was fixed in liquidsoap 1.3.1
Fixed:
Fixed run_process, get_process_lines, get_process_output when compiling with OCaml <= 4.03 (#437, #439)
https://github.com/savonet/liquidsoap/blob/1.3.1/CHANGES#L12