SonarQube ignores Postgres and picks embedded database - postgresql

I tried to configure SonarQube (7.6) to use a PostgreSQL database, but it seems to ignore the entries in sonar.properties and picks up the H2 database instead. When I pass the same properties as JVM parameters, they are picked up fine. How do I configure SonarQube to use Postgres from sonar.properties?
sonar.properties:
sonar.jdbc.username=myuser
sonar.jdbc.password=mypwd
sonar.jdbc.url=jdbc:postgresql://ipaddress:5444/mypostgres
sonar.log.level=DEBUG
Log snippet:
2019.03.25 05:42:03 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2019.03.25 05:42:03 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 7.6.0.21501 / d56689a5eb122c06cf87375828085609f5a68323
2019.03.25 05:42:03 INFO web[][o.s.s.p.d.EmbeddedDatabase] Starting embedded database on port 9092 with url jdbc:h2:tcp://127.0.0.1:9092/sonar
2019.03.25 05:42:03 INFO web[][o.s.s.p.d.EmbeddedDatabase] Embedded database started. Data stored in: /opt/sonarqube/data
2019.03.25 05:42:03 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:h2:tcp://127.0.0.1:9092/sonar
2019.03.25 05:42:03 WARN web[][o.s.db.dialect.H2] H2 database should be used for evaluation purpose only.
2019.03.25 05:42:05 INFO web[][o.s.s.p.ServerFileSystemImpl] SonarQube home: /opt/sonarqube
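For reference, this is the shape of the JVM-parameter invocation that does pick the settings up; a hedged sketch, assuming a plain zip install of SonarQube 7.6 (the jar name and path are assumptions):

# The same settings passed as -D system properties (values from the question):
java -jar lib/sonar-application-7.6.jar \
  -Dsonar.jdbc.url=jdbc:postgresql://ipaddress:5444/mypostgres \
  -Dsonar.jdbc.username=myuser \
  -Dsonar.jdbc.password=mypwd

If the file route is preferred, it is worth confirming that the file being edited is $SONARQUBE_HOME/conf/sonar.properties (the copy the server actually reads), that the three sonar.jdbc entries are uncommented, and that no BOM or stray characters precede the keys.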

Related

Siddhi CDC Postgres app not giving Siddhi logs

I have created the following Siddhi CDC app to capture data changes in a PostgreSQL database table.
@App:name('post')
@source(type = 'cdc', url = 'jdbc:postgresql://postgres:5432/shipment_db',
    username = 'postgresuser', password = 'postgrespw',
    table.name = 'public.shipments', operation = 'insert', plugin.name = 'pgoutput', slot.name = 'postslot',
    @map(type = 'keyvalue', @attributes(shipment_id = 'shipment_id', order_id = 'order_id', date_created = 'date_created', status = 'status')))
define stream inputStream (shipment_id long, order_id long, date_created string, status string);
@sink(type = 'log')
define stream OutputStream (shipment_id long, date_created string);
@info(name = 'query1')
from inputStream
select shipment_id, date_created
insert into OutputStream;
I placed siddhi-io-cdc-2.0.12.jar and siddhi-core-5.1.21.jar in the ./files/bundles directory, and org.wso2.carbon.si.metrics.core-3.0.57.jar and postgresql-42.3.3.jar in the ./files/jars directory, then built a Docker image named siddhiimgpostgres from the Dockerfile described at https://siddhi.io/en/v5.1/docs/config-guide/#adding-to-siddhi-docker-microservice.
Following is the Docker command I used to run the Siddhi app.
docker run -it --net postgres-docker_default --rm -p 8006:8006 -v /home/me/siddhi-apps:/apps siddhiimgpostgres:tag1 -Dapps=/apps/post.siddhi
Following are the logs I got.
[2022-08-24 06:35:43,975] INFO {io.debezium.relational.RelationalSnapshotChangeEventSource} - Snapshot step 7 - Snapshotting data
[2022-08-24 06:35:43,976] INFO {io.debezium.relational.RelationalSnapshotChangeEventSource} - Exporting data from table 'public.shipments'
[2022-08-24 06:35:43,976] INFO {io.debezium.relational.RelationalSnapshotChangeEventSource} - For table 'public.shipments' using select statement: 'SELECT * FROM "public"."shipments"'
[2022-08-24 06:35:43,995] INFO {io.debezium.relational.RelationalSnapshotChangeEventSource} - Finished exporting 11 records for table 'public.shipments'; total duration '00:00:00.019'
[2022-08-24 06:35:43,997] INFO {io.debezium.pipeline.source.AbstractSnapshotChangeEventSource} - Snapshot - Final stage
[2022-08-24 06:35:43,998] INFO {io.debezium.pipeline.ChangeEventSourceCoordinator} - Snapshot ended with SnapshotResult [status=COMPLETED, offset=PostgresOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.postgresql.Source:STRUCT}, sourceInfo=source_info[server='postgres_5432'db='shipment_db', lsn=LSN{0/16DCA30}, txId=592, timestamp=2022-08-24T06:35:43.994Z, snapshot=FALSE, schema=public, table=shipments], partition={server=postgres_5432}, lastSnapshotRecord=true, lastCompletelyProcessedLsn=null, lastCommitLsn=null, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0]]]
[2022-08-24 06:35:44,001] INFO {io.debezium.pipeline.metrics.StreamingChangeEventSourceMetrics} - Connected metrics set to 'true'
[2022-08-24 06:35:44,001] INFO {io.debezium.pipeline.ChangeEventSourceCoordinator} - Starting streaming
[2022-08-24 06:35:44,001] INFO {io.debezium.connector.postgresql.PostgresStreamingChangeEventSource} - Retrieved latest position from stored offset 'LSN{0/16DCA30}'
[2022-08-24 06:35:44,002] INFO {io.debezium.connector.postgresql.connection.WalPositionLocator} - Looking for WAL restart position for last commit LSN 'null' and last change LSN 'LSN{0/16DCA30}'
[2022-08-24 06:35:44,002] INFO {io.debezium.connector.postgresql.connection.PostgresReplicationConnection} - Initializing PgOutput logical decoder publication
[2022-08-24 06:35:44,017] INFO {io.debezium.connector.postgresql.connection.PostgresConnection} - Obtained valid replication slot ReplicationSlot [active=false, latestFlushedLsn=LSN{0/16DB220}, catalogXmin=585]
[2022-08-24 06:35:44,021] INFO {io.debezium.jdbc.JdbcConnection} - Connection gracefully closed
[2022-08-24 06:35:44,072] INFO {io.debezium.connector.postgresql.PostgresSchema} - REPLICA IDENTITY for 'public.shipments' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns
[2022-08-24 06:35:44,073] INFO {io.debezium.connector.postgresql.PostgresStreamingChangeEventSource} - Searching for WAL resume position
I am only getting logs about the number of records in the table. Why am I not getting Siddhi logs showing the data that are actually in the database table?
Thank you!
Is it capturing the changed data after the server starts up? If so, the issue is likely the snapshot mode [1] configured when creating the connection. By default it is set to initial, which takes a snapshot when the connection is established and then reads the changed data from that position onward.
[1] https://debezium.io/documentation/reference/1.4/connectors/postgresql.html#postgresql-property-snapshot-mode
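If the goal is to skip the initial snapshot and log only changes made after startup, a hedged sketch of the source annotation: the connector.properties parameter is documented for siddhi-io-cdc as a way to pass raw Debezium options, but verify the exact name against your version.

@source(type = 'cdc', url = 'jdbc:postgresql://postgres:5432/shipment_db',
    username = 'postgresuser', password = 'postgrespw',
    table.name = 'public.shipments', operation = 'insert',
    plugin.name = 'pgoutput', slot.name = 'postslot',
    connector.properties = "snapshot.mode=never",
    @map(type = 'keyvalue', @attributes(shipment_id = 'shipment_id', order_id = 'order_id', date_created = 'date_created', status = 'status')))
define stream inputStream (shipment_id long, order_id long, date_created string, status string);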

IllegalStateException: Can not connect to database. Please check connectivity and settings

CentOS 7
Docker 20.10.5
On my machine PostgreSQL 9.5 is running, and I can successfully open my DB:
localhost:5432/sonar
I can also open the DB successfully with pgAdmin.
Nice.
Now I have installed SonarQube 4.5 in Docker, and I want to connect to my DB.
I tried this:
sudo docker run -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=sonar -e SONARQUBE_JDBC_URL=jdbc:postgresql://localhost:5432/sonar sonarqube:4.5.7
But I get this error:
2021.04.20 11:47:55 INFO web[o.s.s.p.ServerImpl] SonarQube Server / 4.5.7 / e2afb0bff1b8be759789d2c1bc9348de6f519f83
2021.04.20 11:47:55 INFO web[o.s.c.p.Database] Create JDBC datasource for jdbc:postgresql://localhost:5432/sonar
2021.04.20 11:47:55 ERROR web[o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.PlatformServletContextListener
java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
at org.sonar.core.persistence.DefaultDatabase.checkConnection(DefaultDatabase.java:115) ~[sonar-core-4.5.7.jar:na]
at org.sonar.core.persistence.DefaultDatabase.start(DefaultDatabase.java:73) ~[sonar-core-4.5.7.jar:na]
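One detail worth checking in the command above: inside a container, localhost resolves to the container itself, not to the CentOS host where PostgreSQL is listening. A hedged sketch of the same command pointed at the host instead (192.168.1.10 is a placeholder for the host's LAN IP; host.docker.internal is not available on Linux by default):

# Placeholder host IP; substitute the address of the machine running Postgres:
sudo docker run -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=sonar \
  -e SONARQUBE_JDBC_URL=jdbc:postgresql://192.168.1.10:5432/sonar sonarqube:4.5.7

PostgreSQL also has to accept connections on that interface (listen_addresses in postgresql.conf plus a matching pg_hba.conf entry).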

Sqoop - Postgres No Connection Parameters Specified

I am trying to connect to a Postgres DB using Sqoop (the end goal is to import tables directly into HDFS), but I am facing the issue below.
sqoop list-tables --connect jdbc:postgresql://<server_name>:5432/aae_data --username my_username -P --verbose
Warning: /opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p2260.2452/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/24 00:13:40 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.9.1
18/04/24 00:13:40 DEBUG tool.BaseSqoopTool: Enabled debug logging.
Enter password:
18/04/24 00:13:44 DEBUG sqoop.ConnFactory: Loaded manager factory: org.apache.sqoop.manager.oracle.OraOopManagerFactory
18/04/24 00:13:44 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
18/04/24 00:13:44 DEBUG sqoop.ConnFactory: Trying ManagerFactory: org.apache.sqoop.manager.oracle.OraOopManagerFactory
18/04/24 00:13:45 DEBUG oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop can be called by Sqoop!
18/04/24 00:13:45 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.cloudera.sqoop.manager.DefaultManagerFactory
18/04/24 00:13:45 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:postgresql:
18/04/24 00:13:45 INFO manager.SqlManager: Using default fetchSize of 1000
18/04/24 00:13:45 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.PostgresqlManager@56a6d5a6
18/04/24 00:13:45 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
Does anyone know what might be the issue here?
Do I need to specify a connection manager? If yes, how do I pass the jar file?
Thank You.
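To the second question: both flags exist in Sqoop 1, so a hedged sketch that names the manager explicitly and assumes the PostgreSQL JDBC jar has been copied into Sqoop's lib directory (the /var/lib/sqoop path is an assumption for a CDH install):

# Make the driver visible to Sqoop, then name the manager explicitly:
cp postgresql-<version>.jar /var/lib/sqoop/   # or $SQOOP_HOME/lib
sqoop list-tables \
  --connect jdbc:postgresql://<server_name>:5432/aae_data \
  --connection-manager org.apache.sqoop.manager.PostgresqlManager \
  --username my_username -P

Note that the "No connection paramenters specified" line is logged at DEBUG level and is informational, not an error; any real failure would appear after it.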

Odoo v8 server won't start from Eclipse

I am trying to start the Odoo v8 server from the Eclipse IDE. I have set up the debug configuration and given the config file path in the arguments as
-c /etc/odoo-server.conf.
When I debug it as a Python run, I do not get any error, and the log file does not show any error either. But when I open localhost:8069 in the browser, I get a "server not found" error. This does not happen when I start the server from the terminal. Can anyone tell me what the problem could be?
Below is the odoo-server.conf content:
[options]
; This is the password that allows database operations:
admin_passwd = admin
db_host = False
db_port = False
db_user = odoo
db_password = odoo
addons_path = /opt/odoo/addons
logfile = /var/log/odoo/odoo-server.log
Below is the server traceback:
2014-11-15 07:47:06,205 3875 INFO ? openerp: OpenERP version 8.0
2014-11-15 07:47:06,206 3875 INFO ? openerp: addons paths: ['/home/hassan/.local/share/Odoo/addons/8.0', u'/opt/odoo/addons', '/opt/odoo/openerp/addons']
2014-11-15 07:47:06,207 3875 INFO ? openerp: database hostname: localhost
2014-11-15 07:47:06,207 3875 INFO ? openerp: database port: 5432
2014-11-15 07:47:06,207 3875 INFO ? openerp: database user: odoo
2014-11-15 07:47:07,046 3875 INFO ? openerp.service.server: Evented Service (longpolling) running on 0.0.0.0:8072
Check whether the configuration file you have set has the appropriate access rights. Also, try not to log errors to a file; instead, let them show on the console in the Eclipse IDE.
There's nothing wrong with your "traceback": it is a normal log with only INFO messages, and it tells you that the server is running and waiting for requests on port 8072.
Point a browser at http://localhost:8072 and you should see a login page.
I know my answer is late, but I ran into this problem today and got the server running, so I thought I should share:
Do not start the OpenERP server as: Debug As -> Python Run.
That only starts the longpolling service.
Instead, run it as: Run As -> Python Run.
This will start the HTTP service on your defined port, or by default on 8069.
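As a sanity check, the terminal equivalent the asker says already works; a minimal sketch, assuming an Odoo 8 source checkout under /opt/odoo:

cd /opt/odoo
./odoo.py -c /etc/odoo-server.conf

If this serves http://localhost:8069 while the Eclipse launch does not, the difference lies in the Eclipse run configuration rather than in odoo-server.conf.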

Alembic / Flask-migrate migration on Heroku runs but does not create tables

I am attempting to deploy a Flask app to Heroku. I have pushed to Heroku and can access my login page, but any call to the DB gives an OperationalError:
2014-01-29T12:12:31.801772+00:00 app[web.1]: OperationalError: (OperationalError) no such table: projects_project u'SELECT
Using Flask-migrate I can successfully run local migrations and upgrades:
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade None -> 4b56d58e1d4c, empty message
When I try to upgrade on Heroku using heroku run python manage.py db upgrade, the upgrade appears to happen, but the Context impl is now SQLite:
(dev01)Toms-MacBook-Pro:dev01 kt$ heroku run python manage.py db upgrade
Running `python manage.py db upgrade` attached to terminal... up, run.9069
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade None -> 4b56d58e1d4c, empty message
Running heroku pg:info gives:
=== HEROKU_POSTGRESQL_PINK_URL (DATABASE_URL)
Plan: Dev
Status: available
Connections: 0
PG Version: 9.3.2
Created: 2014-01-27 18:55 UTC
Data Size: 6.4 MB
Tables: 0
Rows: 0/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
The relevant logs for the Heroku upgrade are:
2014-01-29T12:55:40.112436+00:00 heroku[api]: Starting process with command `python manage.py db upgrade` by kt#gmail.com
2014-01-29T12:55:44.638957+00:00 heroku[run.9069]: Awaiting client
2014-01-29T12:55:44.667692+00:00 heroku[run.9069]: Starting process with command `python manage.py db upgrade`
2014-01-29T12:55:44.836337+00:00 heroku[run.9069]: State changed from starting to up
2014-01-29T12:55:46.643857+00:00 heroku[run.9069]: Process exited with status 0
2014-01-29T12:55:46.656134+00:00 heroku[run.9069]: State changed from up to complete
Also, heroku config gives me:
(dev01)Toms-MacBook-Pro:dev01 kt$ heroku config
=== myapp Config Vars
DATABASE_URL: postgres://xxx.compute-1.amazonaws.com:5432/da0jtkatk6057v
HEROKU_POSTGRESQL_PINK_URL: postgres://xxx.compute-1.amazonaws.com:5432/da0jtkatk6057v
where the two xxx placeholders are identical.
How is the Context impl set? Apart from this obvious difference between the working local setup and Heroku, I can't work out what's happening or how I should debug it. Thanks.
The URL for the database is taken from the SQLALCHEMY_DATABASE_URI configuration in your Flask app instance. This happens in the env.py configuration for Alembic that was created in the migrations folder.
Are you storing the value of os.environ['DATABASE_URL'] in the configuration before you hand over control to Flask-Migrate and Alembic? It seems you have a default SQLite-based database that never gets overwritten with the real one provided by Heroku.
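A minimal sketch of the pattern being described, assuming a typical config module (the Config class and the app.db fallback are illustrative, not taken from the question):

import os

basedir = os.path.abspath(os.path.dirname(__file__))

class Config:
    # On Heroku, DATABASE_URL is set, so Postgres is used there as well;
    # the SQLite fallback only applies locally. If this value hard-codes
    # SQLite, Alembic's env.py sees the SQLite URI, which matches the
    # "Context impl SQLiteImpl." line in the logs.
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL') or \
        'sqlite:///' + os.path.join(basedir, 'app.db')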