Is there a way to limit Kafka Connect heap space when the Debezium connector is fetching data from SQL Server? - apache-kafka

I am trying to set up a connector that fetches data from a SQL Server instance for use with Apache Kafka. I've set up all of the Kafka services with a docker-compose file; however, the SQL Server is on another server.
This is the configuration of my debezium connector in ksqldb:
CREATE SOURCE CONNECTOR sql_connector WITH (
  'connector.class' = 'io.debezium.connector.sqlserver.SqlServerConnector',
  'database.server.name' = 'sqlserver',
  'database.hostname' = 'xxxxx',
  'database.port' = '1433',
  'database.user' = 'xxxx',
  'database.password' = 'xxxxxx',
  'database.dbname' = 'xxxxxxxxx',
  'database.history.kafka.bootstrap.servers' = 'broker:29092',
  'database.history.kafka.topic' = 'dbz_dbhistory.sqlserver.asgard-01'
);
When I do this, I get a response that the connector was successfully created; however, when I query ksqlDB with 'show connectors' I get the following error message:
io.confluent.ksql.util.KsqlServerException: org.apache.http.conn.HttpHostConnectException: Connect to connect:8083 [connect/172.18.0.6] failed: Connection refused (Connection refused)
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to
connect:8083 [connect/172.18.0.6] failed: Connection refused (Connection
refused)
Caused by: Could not connect to the server.
Caused by: Could not connect to the server.
When I inspect my Kafka Connect logs I can see that it is issuing SELECT statements to the server, but after a while I get the following error and my Kafka Connect worker shuts down:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000b5400000, 118489088, 0) failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 118489088 bytes for committing reserved memory.
# An error report file with more information is saved as:
Any ideas on how to fix this, other than just giving my server more RAM?

Your machine has less than the ~118 MB of free memory that the JVM is trying to reserve:
Native memory allocation (mmap) failed to map 118489088 bytes for committing reserved memory
You will need to increase or free up memory on the machine to get the JVM to start. Once it is able to run, you can change the JVM's heap settings with the following environment variable:
KAFKA_HEAP_OPTS="-Xms256M -Xmx2G"
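For example, if the Connect worker is one of the services in your docker-compose file, you can set the variable there. A minimal sketch, assuming the service is named connect (as the error message suggests); the image name is a placeholder for whichever Connect image you actually run:

connect:
  image: confluentinc/cp-kafka-connect   # placeholder; use your actual Connect image
  ports:
    - "8083:8083"
  environment:
    # Cap the Connect worker's heap: start at 256 MB, never grow past 2 GB
    KAFKA_HEAP_OPTS: "-Xms256M -Xmx2G"

Keep in mind that -Xmx only caps the heap. The mmap failure above means the host OS itself ran out of memory, so on a small host a lower ceiling such as -Xmx512M may be what actually keeps the worker alive.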

Related

Unable to make a connection between MSSQL and Python images using Docker Compose

I am using Docker Compose to build Python and MSSQL images and to connect the Python app to the DB. I added the DB container as the server in the Python connection file, but I am getting errors like:
pyodbc.OperationalError: ('08001', '[08001] [FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)')
Adaptive Server is unavailable or does not exist
pyodbc.connect('DRIVER={FreeTDS};'
               'SERVER=MSSQL_DB;'
               'PORT=1433;'
               'DATABASE=MYDATABASE;'
               'TDS_Version=7.4', autocommit=True)
my_python_app | pyodbc.OperationalError: ('08001', '[08001] [FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)')
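For reference, a minimal sketch of a connection that typically works in this kind of Compose setup, assuming the MSSQL container is a service named mssql_db on the same Compose network as the app (service name, database, and credentials below are placeholders):

import pyodbc

# Placeholders: SERVER must be the MSSQL service name from docker-compose.yml,
# and both containers must share a Compose network; FreeTDS then resolves the
# name via Docker's internal DNS.
conn = pyodbc.connect(
    'DRIVER={FreeTDS};'
    'SERVER=mssql_db;'          # Compose service name, not localhost
    'PORT=1433;'
    'DATABASE=MYDATABASE;'
    'UID=sa;'                   # placeholder credentials
    'PWD=YourStrong!Passw0rd;'
    'TDS_Version=7.4;',
    autocommit=True,
)
print(conn.cursor().execute('SELECT @@VERSION').fetchone()[0])

If the name and network are right, the other usual suspect is timing: depends_on does not wait for SQL Server to finish starting, so the first connection attempt may need a retry loop.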

ORA-12170: TNS:Connect timeout occurred on IBM Cloud Pak

I was trying to connect DataStage on IBM Cloud Pak to my Oracle database using the 'Oracle (optimize) connection' and kept getting this error:
"The test was not successful.
The assets request failed: Connection failed: {"failure_message":"[The connector could not establish connection to the specified Oracle server. Method: OCIServerAttach, Error code: 12170, Error message ORA-12170: TNS:Connect timeout occurred.]","status":"failure"}". What is the possible reason, and what should I do?

Snowflake pyspark connector exception net.snowflake.client.jdbc.SnowflakeSQLException

I am facing the below exception while trying to connect to Snowflake from PySpark:
py4j.protocol.Py4JJavaError: An error occurred while calling o117.load.
: net.snowflake.client.jdbc.SnowflakeSQLException: !200051!
at net.snowflake.client.core.SFBaseSession.getHttpClientKey(SFBaseSession.java:321)
at net.snowflake.client.core.SFSession.open(SFSession.java:408)
at net.snowflake.client.jdbc.DefaultSFConnectionHandler.initialize(DefaultSFConnectionHandler.java:104)
at net.snowflake.client.jdbc.DefaultSFConnectionHandler.initializeConnection(DefaultSFConnectionHandler.java:79)
at net.snowflake.client.jdbc.SnowflakeConnectionV1.initConnectionWithImpl(SnowflakeConnectionV1.java:116)
at net.snowflake.client.jdbc.SnowflakeConnectionV1.<init>(SnowflakeConnectionV1.java:96)
at net.snowflake.client.jdbc.SnowflakeDriver.connect(SnowflakeDriver.java:172)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at net.snowflake.spark.snowflake.JDBCWrapper.getConnector(SnowflakeJDBCWrapper.scala:209)
It looks like you are behind a firewall or a proxy server. I suggest using SnowCD, the Snowflake connectivity diagnostic tool, to make sure that all Snowflake URLs are reachable. If you see any errors, you might want to check your firewall configuration or add a proxy configuration to the Spark connection.
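A minimal sketch of what that proxy configuration could look like from PySpark; the account URL, credentials, and proxy host/port are placeholders, and it assumes the Snowflake Spark connector package (net.snowflake:spark-snowflake_2.12:<version>) is on the classpath. The http.useProxy / http.proxyHost / http.proxyPort JVM properties are the ones the Snowflake JDBC driver reads:

from pyspark.sql import SparkSession

# Route the Snowflake JDBC driver through a proxy via JVM properties
# (placeholder proxy host/port; drop these if SnowCD shows direct access works).
spark = (
    SparkSession.builder
    .appName("snowflake-connectivity-check")
    .config("spark.driver.extraJavaOptions",
            "-Dhttp.useProxy=true "
            "-Dhttp.proxyHost=proxy.example.com "
            "-Dhttp.proxyPort=8080")
    .getOrCreate()
)

# Placeholder connection options for the Snowflake Spark connector.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "USER",
    "sfPassword": "PASSWORD",
    "sfDatabase": "DB",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "WH",
}

# A trivial query is enough to verify that the session can be opened.
df = (
    spark.read.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("query", "select current_version()")
    .load()
)
df.show()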

quickstart error - Using EF Migrations for local SQL Server and keep losing db connection

I am following the IdentityServer4 quickstart and trying to migrate in-memory data to my local SQL Server (not SQL Express or LocalDB that came with VS). My connection string is:
@"Server=localhost,1434;Database=MyIDS;user id=tester_1;Password=tester_1;trusted_connection=yes;"
When I start my IdentityServer, it creates the empty db, MyIDS, and then throws an exception with 2 inner exceptions:
Inner Exception 1:
SqlException: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - An established connection was aborted by the software in your host machine.)
Inner Exception 2:
Win32Exception: An established connection was aborted by the software in your host machine.
Can anyone tell me what's going on here? Why a working connection always gets dropped?
localhost,1434 looks questionable: the default SQL Server port is 1433, so unless you deliberately configured your instance on 1434 you are pointing at the wrong endpoint, and for a local default instance you do not need to specify a port at all. (The comma itself is the correct port separator in a SqlClient connection string; the colon form belongs to JDBC URLs.) Also note that trusted_connection=yes selects Windows authentication, which overrides the user id/Password pair.
For local development I typically use:
server=.;Database=ASPIdentity;Trusted_Connection=True;
The dot means localhost; if you use SQL Express the connection string would become:
server=.\\sqlexpress;Database=ASPIdentity;Trusted_Connection=True;

Streamsets DC and Crate exception. ERROR: SQLParseException: line 1:13: no viable alternative at input 'CHARACTERISTICS'

I am trying to connect to Crate as a StreamSets Data Collector pipeline origin (JDBC Consumer). However, I get this error: "JDBC_00 - Cannot connect to specified database: com.streamsets.pipeline.api.StageException: JDBC_06 - Failed to initialize connection pool: com.zaxxer.hikari.pool.PoolInitializationException: Exception during pool initialization: ERROR: SQLParseException: line 1:13: no viable alternative at input 'CHARACTERISTICS'"
Why am I getting this error? The Crate JDBC driver version is 2.1.5 and the StreamSets Data Collector version is 2.4.0.0.
@gashey already solved the issue: within StreamSets DC, uncheck 'Enforce Read-only Connection' on the Advanced tab of the JDBC Query Consumer configuration
(see https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/crateio/hBexxel2KQw/kU34mrsJBgAJ).
We will update the StreamSets documentation with the workaround: https://crate.io/docs/tools/streamsets/