CentOS 7
Docker 20.10.5
PostgreSQL 9.5 is running on my machine, and I can successfully open my DB:
localhost:5432/sonar
I can also open the DB with pgAdmin.
Nice.
Now I have installed SonarQube 4.5 in Docker and want to connect it to my DB.
I tried this:
sudo docker run -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=sonar -e SONARQUBE_JDBC_URL=jdbc:postgresql://localhost:5432/sonar sonarqube:4.5.7
But I get this error:
2021.04.20 11:47:55 INFO web[o.s.s.p.ServerImpl] SonarQube Server / 4.5.7 / e2afb0bff1b8be759789d2c1bc9348de6f519f83
2021.04.20 11:47:55 INFO web[o.s.c.p.Database] Create JDBC datasource for jdbc:postgresql://localhost:5432/sonar
2021.04.20 11:47:55 ERROR web[o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.PlatformServletContextListener
java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
at org.sonar.core.persistence.DefaultDatabase.checkConnection(DefaultDatabase.java:115) ~[sonar-core-4.5.7.jar:na]
at org.sonar.core.persistence.DefaultDatabase.start(DefaultDatabase.java:73) ~[sonar-core-4.5.7.jar:na]
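One detail worth checking here (a guess, not a confirmed diagnosis): inside the container, localhost refers to the container itself, not to the CentOS host where PostgreSQL is listening, so the JDBC URL above can never reach the database. A minimal sketch of the same command, assuming the default docker0 bridge gateway address 172.17.0.1 (the host's LAN IP or --network=host are the other common options):
sudo docker run -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=sonar -e SONARQUBE_JDBC_URL=jdbc:postgresql://172.17.0.1:5432/sonar sonarqube:4.5.7
PostgreSQL on the host would also need to listen on that interface (listen_addresses) and allow the container's subnet in pg_hba.conf.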
I tried to configure SonarQube (7.6) to use a PostgreSQL database, but it seems to ignore the entries in sonar.properties and picks up the H2 database instead. When I pass the properties as JVM parameters, it picks them up fine. How do I configure SonarQube to use PostgreSQL from sonar.properties?
sonar.properties:
sonar.jdbc.username=myuser
sonar.jdbc.password=mypwd
sonar.jdbc.url=jdbc:postgresql://ipaddress:5444/mypostgres
sonar.log.level=DEBUG
Log snippet:
2019.03.25 05:42:03 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2019.03.25 05:42:03 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 7.6.0.21501 / d56689a5eb122c06cf87375828085609f5a68323
2019.03.25 05:42:03 INFO web[][o.s.s.p.d.EmbeddedDatabase] Starting embedded database on port 9092 with url jdbc:h2:tcp://127.0.0.1:9092/sonar
2019.03.25 05:42:03 INFO web[][o.s.s.p.d.EmbeddedDatabase] Embedded database started. Data stored in: /opt/sonarqube/data
2019.03.25 05:42:03 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:h2:tcp://127.0.0.1:9092/sonar
2019.03.25 05:42:03 WARN web[][o.s.db.dialect.H2] H2 database should be used for evaluation purpose only.
2019.03.25 05:42:05 INFO web[][o.s.s.p.ServerFileSystemImpl] SonarQube home: /opt/sonarqube
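A sanity check that may help (an assumption, not a confirmed cause): the "Starting embedded database" line means sonar.jdbc.url was empty when the web process started, which usually points to the wrong sonar.properties being edited or to the lines still carrying a leading '#'. Something like the following, run against the installation that produced the log above, shows what the server actually reads (/opt/sonarqube is the SonarQube home printed in the log; adjust the path if your conf directory differs):
grep -n "^sonar.jdbc" /opt/sonarqube/conf/sonar.properties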
First let's create a Hive-enabled Spark session:
val spark = SparkSession.builder.config(conf).enableHiveSupport.getOrCreate
Then let's try to connect to the remote DB:
spark.sql("use my_remote_db").show
17/12/10 10:27:02 WARN ObjectStore: Failed to get database my_remote_db, returning NoSuchObjectException
Exception in thread "main" org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'my_remote_db' not found;
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.org$apache$spark$sql$catalyst$catalog$SessionCatalog$$requireDbExists(SessionCatalog.scala:173)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.setCurrentDatabase(SessionCatalog.scala:268)
at org.apache.spark.sql.execution.command.SetDatabaseCommand.run(databases.scala:59)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:182)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
at com.mycompany.sbseg.graph.poc.features.LoadGraphData$.loadGraphData(LoadGraphData.scala:8)
Note: this same code works in the spark-shell.
Here are the additional settings made in IntelliJ to emulate the bash shell environment used to run spark-shell.
To ensure they were being set properly, they are printed out:
Seq("SPARK_HOME", "HIVE_CONF_DIR", "HIVE_HOME")
  .foreach { s => println(s"$s:${System.getenv(s)}") }
These print out the same/correct results as on the command line.
/shared/spark
/Users/sboesch/yarnconf
/usr/local/Cellar/hive/2.1.1/libexec
So it is unclear what the differences are between the IntelliJ and bash environments, and why the code does not work properly in the former.
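One plausible difference (an assumption, not something verified against this setup): the spark-shell launch scripts put $SPARK_HOME/conf and $HIVE_CONF_DIR on the classpath, so enableHiveSupport finds hive-site.xml and talks to the remote metastore, whereas a program launched from IntelliJ only sees its module classpath and silently falls back to a local Derby metastore that has never heard of my_remote_db. A minimal sketch that points the session at the metastore explicitly (the thrift URI is a placeholder, not taken from the post; normally it comes from hive-site.xml):
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder
  .config(conf) // same SparkConf as above
  .config("hive.metastore.uris", "thrift://your-metastore-host:9083") // placeholder URI
  .enableHiveSupport
  .getOrCreate
Alternatively, adding the directory that contains hive-site.xml (e.g. $HIVE_CONF_DIR) as a resources or classpath entry of the IntelliJ module lets enableHiveSupport pick it up the same way spark-shell does.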
I'm trying to connect to a remote Mongo database from an EMR cluster. The following code is executed with the command spark-shell --packages com.stratio.datasource:spark-mongodb_2.10:0.11.2:
import com.stratio.datasource.mongodb._
import com.stratio.datasource.mongodb.config._
import com.stratio.datasource.mongodb.config.MongodbConfig._
val builder = MongodbConfigBuilder(Map(Host -> List("[IP.OF.REMOTE.HOST]:3001"), Database -> "meteor", Collection ->"my_target_collection", ("user", "user_name"), ("database", "meteor"), ("password", "my_password")))
val readConfig = builder.build()
val mongoRDD = sqlContext.fromMongoDB(readConfig)
Spark-shell responds with the following error:
16/07/26 15:44:35 INFO SparkContext: Starting job: aggregate at MongodbSchema.scala:47
16/07/26 15:44:45 WARN DAGScheduler: Creating new stage failed due to exception - job: 1
com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting to connect. Client view of cluster state is {type=Unknown, servers=[{address=[IP.OF.REMOTE.HOST]:3001, type=Unknown, state=Connecting, exception={java.lang.IllegalArgumentException: response too long: 1347703880}}]
at com.mongodb.BaseCluster.getDescription(BaseCluster.java:128)
at com.mongodb.DBTCPConnector.getClusterDescription(DBTCPConnector.java:394)
at com.mongodb.DBTCPConnector.getType(DBTCPConnector.java:571)
at com.mongodb.DBTCPConnector.getReplicaSetStatus(DBTCPConnector.java:362)
at com.mongodb.Mongo.getReplicaSetStatus(Mongo.java:446)
.
.
.
After reading for a while, a few responses here on SO and in other forums state that the java.lang.IllegalArgumentException: response too long: 1347703880 error might be caused by a faulty Mongo driver. Based on that, I started executing spark-shell with updated drivers, like so:
spark-shell --packages com.stratio.datasource:spark-mongodb_2.10:0.11.2 --jars casbah-commons_2.10-3.1.1.jar,casbah-core_2.10-3.1.1.jar,casbah-query_2.10-3.1.1ja.jar,mongo-java-driver-2.13.0.jar
Of course, before this I downloaded the JARs and stored them in the same directory from which spark-shell was executed. Nonetheless, with this approach spark-shell answers with the following cryptic error message:
Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: com/mongodb/casbah/query/dsl/CurrentDateOp
at com.mongodb.casbah.MongoClient.apply(MongoClient.scala:218)
at com.stratio.datasource.mongodb.partitioner.MongodbPartitioner.isShardedCollection(MongodbPartitioner.scala:78)
It is worth mentioning that the target MongoDB is a Meteor Mongo database, which is why I'm trying to connect to [IP.OF.REMOTE.HOST]:3001 instead of using port 27017.
What might be the issue? I've followed many tutorials, but all of them seem to have MongoDB on the same host, which lets them put localhost:27017 in the credentials. Is there something I'm missing?
Thanks for the help!
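A hedged observation, not a verified fix: the NoClassDefFoundError for com/mongodb/casbah/query/dsl/CurrentDateOp suggests the casbah-query JAR never made it onto the classpath (note the file name casbah-query_2.10-3.1.1ja.jar in the command above, which may not match the actual file), since --jars takes the listed files exactly as written and resolves no transitive dependencies. Letting the resolver fetch Casbah instead avoids hand-listing the JARs; a sketch, assuming the Casbah 3.1.1 artifacts published under the org.mongodb group on Maven Central:
spark-shell --packages com.stratio.datasource:spark-mongodb_2.10:0.11.2,org.mongodb:casbah-core_2.10:3.1.1,org.mongodb:casbah-query_2.10:3.1.1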
I ended up using MongoDB's official Java driver instead. This was my first experience with Spark and the Scala programming language, so I wasn't very familiar with the idea of using plain Java JARs yet.
The solution
I downloaded the necessary JARs and stored them in the same directory as the job file, which is a Scala file. So the directory looked something like:
/job_directory
|--job.scala
|--bson-3.0.1.jar
|--mongodb-driver-3.0.1.jar
|--mongodb-driver-core-3.0.1.jar
Then, I start spark-shell as follows to load the JARs and their classes into the shell environment:
spark-shell --jars "mongodb-driver-3.0.1.jar,mongodb-driver-core-3.0.1.jar,bson-3.0.1.jar"
Next, I execute the following to load the source code of the job into the spark-shell:
:load job.scala
Finally I execute the main object in my job like so:
MainObject.main(Array())
As for the code inside the MainObject, it is merely what the tutorial states:
val mongo = new MongoClient(IP_OF_REMOTE_MONGO, 27017)
val db = mongo.getDB(DB_NAME)
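For completeness, a slightly fuller sketch of the same approach with the plain Java driver (the credentials, database and collection names are placeholders, not taken from the original job):
import com.mongodb.{MongoClient, MongoClientURI}
val uri = new MongoClientURI("mongodb://user_name:my_password@IP.OF.REMOTE.HOST:27017/meteor")
val mongo = new MongoClient(uri)
val db = mongo.getDB("meteor") // deprecated in newer drivers, but available in 3.0.x
val cursor = db.getCollection("my_target_collection").find()
while (cursor.hasNext) println(cursor.next())
mongo.close()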
Hopefully this will help future readers and spark-shell/Scala beginners!
I am attempting to deploy a Flask app to Heroku. I have pushed to Heroku and can access my login page, but any call to the DB gives an OperationalError:
2014-01-29T12:12:31.801772+00:00 app[web.1]: OperationalError: (OperationalError) no such table: projects_project u'SELECT
Using Flask-migrate I can successfully run local migrations and upgrades:
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade None -> 4b56d58e1d4c, empty message
When I try to upgrade on Heroku using heroku run python manage.py db upgrade, the upgrade appears to happen, but the Context impl is now SQLite:
(dev01)Toms-MacBook-Pro:dev01 kt$ heroku run python manage.py db upgrade
Running `python manage.py db upgrade` attached to terminal... up, run.9069
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade None -> 4b56d58e1d4c, empty message
Running heroku pg:info gives:
=== HEROKU_POSTGRESQL_PINK_URL (DATABASE_URL)
Plan: Dev
Status: available
Connections: 0
PG Version: 9.3.2
Created: 2014-01-27 18:55 UTC
Data Size: 6.4 MB
Tables: 0
Rows: 0/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
The relevant logs for the Heroku upgrade are:
2014-01-29T12:55:40.112436+00:00 heroku[api]: Starting process with command `python manage.py db upgrade` by kt#gmail.com
2014-01-29T12:55:44.638957+00:00 heroku[run.9069]: Awaiting client
2014-01-29T12:55:44.667692+00:00 heroku[run.9069]: Starting process with command `python manage.py db upgrade`
2014-01-29T12:55:44.836337+00:00 heroku[run.9069]: State changed from starting to up
2014-01-29T12:55:46.643857+00:00 heroku[run.9069]: Process exited with status 0
2014-01-29T12:55:46.656134+00:00 heroku[run.9069]: State changed from up to complete
Also, heroku config gives me:
(dev01)Toms-MacBook-Pro:dev01 kt$ heroku config
=== myapp Config Vars
DATABASE_URL: postgres://xxx.compute-1.amazonaws.com:5432/da0jtkatk6057v
HEROKU_POSTGRESQL_PINK_URL: postgres://xxx.compute-1.amazonaws.com:5432/da0jtkatk6057v
where the two xxx values are identical.
How is the Context impl set? Apart from this obvious difference between the working local setup and Heroku, I can't work out what's happening or how to debug it. Thanks.
The URL for the database is taken from the SQLALCHEMY_DATABASE_URI configuration in your Flask app instance. This happens in the env.py configuration for Alembic that was created in the migrations folder.
Are you storing the value of os.environ['DATABASE_URL'] in the configuration before you hand over control to Flask-Migrate and Alembic? It seems you have a default SQLite-based database that never gets overwritten with the real one provided by Heroku.
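A minimal sketch of that wiring (the fallback URI and file names are illustrative, not taken from the poster's code): read DATABASE_URL into SQLALCHEMY_DATABASE_URI before Flask-Migrate is initialized, so Alembic's env.py sees Postgres on Heroku and only ever uses SQLite locally.
import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
# Heroku provides DATABASE_URL; fall back to a local SQLite file only for development.
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL', 'sqlite:///app.db')
db = SQLAlchemy(app)
migrate = Migrate(app, db)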