I'm trying to run a script from Google CPB100 - Lab3b (train_and_apply.py) with Dataproc against Cloud SQL (a MySQL database), but I get a timeout.
Caused by: java.net.ConnectException: Connection timed out (Connection timed out)
From the Dataproc master I can connect with the mysql command line, but not with the Python commands from the script. What can I do to diagnose this issue?
Success
$> mysql --host=35.194.7.XXX --user=root --password
Timeout
$> pyspark
%> jdbcDriver='com.mysql.jdbc.Driver'
%> jdbcUrl='jdbc:mysql://35.194.7.XXX:3306/recommendation_spark?user=root&password=XXXX'
%> dfRates = sqlContext.read.format('jdbc').options(driver=jdbcDriver, url=jdbcUrl, dbtable='Rating').load()
I'm not sure what's wrong based on your question, but I would recommend editing the log4j config as described in this StackOverflow post to see whether there are informative INFO or DEBUG logs under com.mysql or org.apache.spark.sql.jdbc.
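One additional check worth making (a sketch, not from the lab itself): the successful mysql test only proves the master can reach the instance, while the JDBC load runs on the worker nodes. Running a plain TCP probe on the executors from the same pyspark session shows whether the workers can reach the database at all:

import socket

def probe(_):
    s = socket.socket()
    s.settimeout(5)
    try:
        s.connect(('35.194.7.XXX', 3306))  # host/port taken from the jdbcUrl above
        yield 'connected'
    except Exception as e:
        yield 'failed: %s' % e
    finally:
        s.close()

print(sc.parallelize(range(4), 4).mapPartitions(probe).collect())

If the workers report failures while the master's mysql client connects fine, the usual suspect is a firewall or Cloud SQL authorized-networks rule that covers the master's address but not the workers'.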
Related
When I try to run Liquibase in GitHub Actions workflows, I get the error below:
Unexpected error running Liquibase: Connection could not be created to jdbc:mariadb://10.13.10.2:3306/liquibase_test with driver org.mariadb.jdbc.Driver. Socket fail to connect to host:address=(host=10.13.10.2)(port=3306)(type=primary). Connect timed out
It's not only in GitHub Actions: when I try to generate a diff on the server where Liquibase is installed, I get the same error.
Can anyone please help me?
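One way to narrow this down (a minimal sketch; the host and port are taken from the error message): run a plain TCP probe from the GitHub Actions runner, or from the server, before invoking Liquibase, to separate basic network reachability from JDBC/driver problems:

import socket

s = socket.socket()
s.settimeout(5)
try:
    s.connect(('10.13.10.2', 3306))  # host/port from the Liquibase error
    print('TCP connection OK')
except Exception as e:
    print('cannot reach host: %s' % e)
finally:
    s.close()

Note that 10.13.10.2 is a private (RFC 1918) address, so a GitHub-hosted runner outside that network would be expected to time out exactly like this; reaching it would take a self-hosted runner or some kind of tunnel into that network.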
I have tried everything to connect my Chainlink node up to my PostgreSQL database, with no luck. I have scoured the interwebs for answers to no avail...
Here is the error message I am receiving:
[ERROR] failed to initialize database, got error failed to connect to `host=/tmp user=root database=`: dial error (dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory)
Here is my .env file:
ROOT=/chainlink
LOG_LEVEL=debug
ETH_CHAIN_ID=42
MIN_OUTGOING_CONFIRMATIONS=2
LINK_CONTRACT_ADDRESS=0xa36085F69e2889c224210F603D836748e7dC0088
CHAINLINK_TLS_PORT=0
SECURE_COOKIES=false
GAS_UPDATER_ENABLED=true
ALLOW_ORIGINS=*
ETH_URL=wss://kovan.infura.io/ws/v3/id...
DATABASE_URL=https://chainlink-db-url://postgres:Password#chainlink-kovan:5432
I have tried every configuration of the connection string. Also, I am able to connect to the DB via pgAdmin no problem, and the DBs are publicly accessible.
The PostgreSQL database is on AWS.
Please change the syntax of your DATABASE_URL to:
DATABASE_URL=postgresql://"username":"password"@"public-ip-pg-server":5432/"database-name"
Just change:
"username": you need to configure a new user, because the default/admin user postgres will not work for it.
"password": the password of that user
"public-ip-pg-server": the public IP address of your PostgreSQL server
"database-name": the name of your database
PS: delete all the " characters in your final syntax ;)
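For example, with purely hypothetical values (user chainlink, password MySecretPassword, server at 203.0.113.10, database chainlink_db), the final line would look like:
DATABASE_URL=postgresql://chainlink:MySecretPassword@203.0.113.10:5432/chainlink_db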
Here is the link to the official documentation: https://docs.chain.link/docs/connecting-to-a-remote-database/
CentOS 7
Docker 20.10.5
On my machine, PostgreSQL 9.5 is running, and I can successfully open my DB:
localhost:5432/sonar
I can also open the DB with pgAdmin.
Nice.
Now I have installed SonarQube 4.5 in Docker and want to connect it to my DB.
I tried this:
sudo docker run -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=sonar -e SONARQUBE_JDBC_URL=jdbc:postgresql://localhost:5432/sonar sonarqube:4.5.7
But I get this error:
2021.04.20 11:47:55 INFO web[o.s.s.p.ServerImpl] SonarQube Server / 4.5.7 / e2afb0bff1b8be759789d2c1bc9348de6f519f83
2021.04.20 11:47:55 INFO web[o.s.c.p.Database] Create JDBC datasource for jdbc:postgresql://localhost:5432/sonar
2021.04.20 11:47:55 ERROR web[o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.PlatformServletContextListener
java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
at org.sonar.core.persistence.DefaultDatabase.checkConnection(DefaultDatabase.java:115) ~[sonar-core-4.5.7.jar:na]
at org.sonar.core.persistence.DefaultDatabase.start(DefaultDatabase.java:73) ~[sonar-core-4.5.7.jar:na]
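This is only a suspicion, not something the logs confirm: inside the container, localhost refers to the container itself, not to the CentOS host where PostgreSQL 9.5 is listening, so the JDBC URL above points at a port nothing is bound to. One variant worth trying is running the container on the host network so that localhost resolves to the host:

sudo docker run --network=host -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=sonar -e SONARQUBE_JDBC_URL=jdbc:postgresql://localhost:5432/sonar sonarqube:4.5.7

Alternatively, keep the default bridge network and put the host's IP address in the JDBC URL instead of localhost, making sure listen_addresses in postgresql.conf and the rules in pg_hba.conf allow connections from the Docker bridge subnet.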
I tried to connect in the shell with:
mongo "mongodb+srv://cluster0.mi3o1.mongodb.net/test" --username cristian
But instead, it looks like it's trying to connect to:
mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
I am getting the error:
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: No connection could be made because the target machine actively refused it. :
connect@src/mongo/shell/mongo.js:374:17
@(connect):2:6
exception: connect failed
exiting with code 1
bash: mongodb+srv://cluster0.mi3o1.mongodb.net/test: No such file or directory
I have created the cluster, set up database access with the admin-role user cristian, whitelisted both my IP and all IPs (0.0.0.0/0), created a new database, loaded the sample databases, opened ports 27015, 27016 and 27017, and tested them on portquiz.net.
I have added a screenshot.
Please help!
(screenshot: terminal output)
Thank you very much D. SM for your help; it looks like it was a problem with my .bash_profile. When I copied and pasted the two paths, somehow the terminal wrote the closing " on another row and added some spaces in between. I rewrote the two rows with no spaces, like this:
alias mongod="/c/Program\ files/MongoDB/Server/4.4/bin/mongod.exe"
alias mongo="/c/Program\ Files/MongoDB/Server/4.4/bin/mongo.exe"
and it worked.
I'm trying to connect to a remote Mongo database from an EMR cluster. The following code is executed with the command spark-shell --packages com.stratio.datasource:spark-mongodb_2.10:0.11.2:
import com.stratio.datasource.mongodb._
import com.stratio.datasource.mongodb.config._
import com.stratio.datasource.mongodb.config.MongodbConfig._
val builder = MongodbConfigBuilder(Map(Host -> List("[IP.OF.REMOTE.HOST]:3001"), Database -> "meteor", Collection ->"my_target_collection", ("user", "user_name"), ("database", "meteor"), ("password", "my_password")))
val readConfig = builder.build()
val mongoRDD = sqlContext.fromMongoDB(readConfig)
Spark-shell responds with the following error:
16/07/26 15:44:35 INFO SparkContext: Starting job: aggregate at MongodbSchema.scala:47
16/07/26 15:44:45 WARN DAGScheduler: Creating new stage failed due to exception - job: 1
com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting to connect. Client view of cluster state is {type=Unknown, servers=[{address=[IP.OF.REMOTE.HOST]:3001, type=Unknown, state=Connecting, exception={java.lang.IllegalArgumentException: response too long: 1347703880}}]
at com.mongodb.BaseCluster.getDescription(BaseCluster.java:128)
at com.mongodb.DBTCPConnector.getClusterDescription(DBTCPConnector.java:394)
at com.mongodb.DBTCPConnector.getType(DBTCPConnector.java:571)
at com.mongodb.DBTCPConnector.getReplicaSetStatus(DBTCPConnector.java:362)
at com.mongodb.Mongo.getReplicaSetStatus(Mongo.java:446)
...
After reading for a while, a few responses here on SO and in other forums state that the java.lang.IllegalArgumentException: response too long: 1347703880 error might be caused by a faulty Mongo driver. Based on that, I started executing spark-shell with updated drivers, like so:
spark-shell --packages com.stratio.datasource:spark-mongodb_2.10:0.11.2 --jars casbah-commons_2.10-3.1.1.jar,casbah-core_2.10-3.1.1.jar,casbah-query_2.10-3.1.1ja.jar,mongo-java-driver-2.13.0.jar
Of course, before this I downloaded the JARs and stored them in the same directory from which spark-shell was executed. Nonetheless, with this approach spark-shell answers with the following cryptic error message:
Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: com/mongodb/casbah/query/dsl/CurrentDateOp
at com.mongodb.casbah.MongoClient.apply(MongoClient.scala:218)
at com.stratio.datasource.mongodb.partitioner.MongodbPartitioner.isShardedCollection(MongodbPartitioner.scala:78)
It is worth mentioning that the target MongoDB is a Meteor Mongo database, which is why I'm trying to connect with [IP.OF.REMOTE.HOST]:3001 instead of using the default port 27017.
What might be the issue? I've followed many tutorials, but all of them seem to have MongoDB on the same host, allowing them to declare localhost:27017 in the credentials. Is there something I'm missing?
Thanks for the help!
I ended up using MongoDB's official Java driver instead. This was my first experience with Spark and the Scala programming language, so I wasn't very familiar with the idea of using plain Java JARs yet.
The solution
I downloaded the necessary JARs and stored them in the same directory as the job file, which is a Scala file. So the directory looked something like:
/job_directory
|--job.scala
|--bson-3.0.1.jar
|--mongodb-driver-3.0.1.jar
|--mongodb-driver-core-3.0.1.jar
Then I start spark-shell as follows to load the JARs and their classes into the shell environment:
spark-shell --jars "mongodb-driver-3.0.1.jar,mongodb-driver-core-3.0.1.jar,bson-3.0.1.jar"
Next, I execute the following to load the source code of the job into the spark-shell:
:load job.scala
Finally I execute the main object in my job like so:
MainObject.main(Array())
As for the code inside MainObject, it is merely what the tutorial states:
import com.mongodb.MongoClient
// IP_OF_REMOTE_MONGO and DB_NAME stand in for the real connection values
val mongo = new MongoClient(IP_OF_REMOTE_MONGO, 27017)
val db = mongo.getDB(DB_NAME)
Hopefully this will help future readers and spark-shell/Scala beginners!