I want to load a large dataset (750 GB) into Skyrise. For this I use
copy LINEITEM from 's3://myBucket/'
credentials 'aws_access_key_id=key;aws_secret_access_key=secret'
null as '\000'
DELIMITER ','
region 'us-east-1'
ESCAPE;
After about 10 minutes I get
Unable to execute HTTP request: Connect to <some IP> failed: Connection refused (Connection refused)
I am able to load other datasets. What is the issue here?
It's likely that your connection was dropped due to a timeout. Please review the following document for steps to correct this issue:
"Troubleshooting connection issues in Amazon Redshift"
Related
I am trying to import data from my local PostgreSQL database to Neo4j.
First, I load the JDBC driver into memory:
CALL apoc.load.driver("org.postgresql.Driver")
Then, I run this query to ingest data from PostgreSQL:
WITH "jdbc:postgresql://localhost:5432/graph-test?user=kt" as url
CALL apoc.load.jdbc(url,"os.operating_systems") YIELD row AS line
MERGE (o:Os {name: line.name})
MERGE (of:OsFamily {name: line.familly})
MERGE (o)-[:FROM]->(of)
Unfortunately, I received this error.
Failed to invoke procedure `apoc.load.jdbc`: Caused by: java.net.ConnectException: Connection refused (Connection refused)
Could this error be caused by the incorrect URL, jdbc plugin version, or something else?
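A quick way to check whether anything is actually listening on the host and port from the JDBC URL (a sketch, assuming a Unix shell with netcat installed):
nc -zv localhost 5432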
I tried to connect in the shell with:
mongo "mongodb+srv://cluster0.mi3o1.mongodb.net/test" --username cristian
But instead, it looks like it's trying to connect with:
mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
I am getting the error
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: No connection could be made because the target machine actively refused it. :
connect#src/mongo/shell/mongo.js:374:17
#(connect):2:6
exception: connect failed
exiting with code 1
bash: mongodb+srv://cluster0.mi3o1.mongodb.net/test: No such file or directory
I have created the cluster, set up database access with the admin-role user cristian, whitelisted both my IP and all IPs (0.0.0.0/0), created a new database, loaded the sample databases, opened ports 27015, 27016, and 27017, and tested them on portquiz.net.
I added a screenshot of the terminal.
Please help!
Thank you very much D. SM for your help; it turned out to be a problem with my .bash_profile. When I copied and pasted the two paths, the terminal somehow put the closing " on another row and added some spaces in between. I rewrote the two lines with no spaces, like this:
alias mongod="/c/Program\ files/MongoDB/Server/4.4/bin/mongod.exe"
alias mongo="/c/Program\ Files/MongoDB/Server/4.4/bin/mongo.exe"
and it worked.
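For anyone hitting the same issue: after fixing the aliases, reload the profile and retry the original command:
source ~/.bash_profile
mongo "mongodb+srv://cluster0.mi3o1.mongodb.net/test" --username cristian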
I am following the IdentityServer4 quickstart and trying to migrate in-memory data to my local SQL Server (not the SQL Express or LocalDB that came with VS). My connection string is:
#"Server=localhost,1434;Database=MyIDS;user id=tester_1;Password=tester_1;trusted_connection=yes;".
When I start my IdentityServer, it creates the empty db, MyIDS, and then throws an exception with 2 inner exceptions:
Inner Exception 1:
SqlException: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - An established connection was aborted by the software in your host machine.)
Inner Exception 2:
Win32Exception: An established connection was aborted by the software in your host machine.
Can anyone tell me what's going on here? Why does a working connection keep getting dropped?
localhost,1434 looks wrong; you shouldn't need to provide the port 1434 at all for a local instance.
For local development I typically use:
server=.;Database=ASPIdentity;Trusted_Connection=True;
The dot means localhost; if you use SQL Express, the connection string would become:
server=.\\sqlexpress;Database=ASPIdentity;Trusted_Connection=True;
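If you do need SQL authentication on a specific port, keep in mind that trusted_connection=yes makes the driver use Windows authentication and ignore the user id and password; a sketch using only the SQL login from the question (localhost,1434 is SqlClient's host,port syntax):
server=localhost,1434;Database=MyIDS;User Id=tester_1;Password=tester_1;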
I am trying to set up a connector that fetches data from a SQL Server to use with Apache Kafka. I've set up all of the Kafka services with a docker-compose file; however, the SQL Server is on another server.
This is the configuration of my Debezium connector in ksqlDB:
create source connector sql_connector with
('connector.class'='io.debezium.connector.sqlserver.SqlServerConnector',
'database.server.name'='sqlserver',
'database.hostname'= 'xxxxx',
'database.port'='1433',
'database.user'= 'xxxx',
'database.password'= 'xxxxxx',
'database.dbname'='xxxxxxxxx',
'database.history.kafka.bootstrap.servers'='broker:29092',
'database.history.kafka.topic'='dbz_dbhistory.sqlserver.asgard-01');
When I do this, I get a response that the connector was successfully created; however, when I query ksqlDB with 'show connectors', I get the following error message:
io.confluent.ksql.util.KsqlServerException: org.apache.http.conn.HttpHostConnectException: Connect to connect:8083 [connect/172.18.0.6] failed: Connection refused (Connection refused)
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to
connect:8083 [connect/172.18.0.6] failed: Connection refused (Connection
refused)
Caused by: Could not connect to the server.
Caused by: Could not connect to the server.
When I inspect my Kafka Connect logs, I can see that it's issuing SELECT statements to the server, but after a while I get the following error and my Kafka Connect shuts down:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000b5400000, 118489088, 0) failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 118489088 bytes for committing reserved memory.
# An error report file with more information is saved as:
Any ideas on how to fix this, other than just giving my server more RAM?
Your machine has less than ~118MB of free memory:
Native memory allocation (mmap) failed to map 118489088 bytes for committing reserved memory
You will need to increase or free up memory on the machine to get the JVM to start. If it's running, you can change the heap memory settings of the JVM using the following environment variable:
KAFKA_HEAP_OPTS="-Xms256M -Xmx2G"
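For example, if Kafka Connect comes up via the docker-compose file mentioned above, the variable can be set in the service's environment (the service and image names here are assumptions):
  connect:
    image: confluentinc/cp-kafka-connect:6.1.0
    environment:
      KAFKA_HEAP_OPTS: "-Xms256M -Xmx2G"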
We are using Azure Database for PostgreSQL (the managed service) to create a DB for each user when the user registers with the application (fewer than 25 user databases right now).
For reporting purposes we need to know each user's DB size.
To retrieve the database sizes we have a Postgres function which fires the following query:
SELECT pg_database.datname, pg_database_size(pg_database.datname)
FROM pg_database;
We execute this function every hour through an Azure Function, but at random times Postgres throws exceptions:
Exception: Npgsql.PostgresException (0x80004005): 58P01: could not read directory "base/16452": No such file or directory at...
The exception remains the same most of the time, with a different directory or file location.
Sometimes it also throws this exception:
Exception: Npgsql.NpgsqlException (0x80004005): Exception while reading from stream ---> System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException
Working on the solution at the MSDN forums here.