Gerrit not starting with PostgreSQL

Following these instructions:
https://gerrit-documentation.storage.googleapis.com/Documentation/2.11.4/install.html
Installed postgres 9.3:
sudo -i -u postgres
postgres@ubuntu:~$ psql
psql (9.3.10)
Type "help" for help.
postgres=#
The gerrit2 user cannot access psql, even though I created the user and database
postgres@ubuntu:~$ createuser --username=postgres -RDIElPS gerrit2
postgres@ubuntu:~$ createdb --username=postgres -E UTF-8 -O gerrit2 reviewdb
^d
gerrit2@ubuntu:~$ psql
psql: FATAL: database "gerrit2" does not exist
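As an aside, psql with no arguments tries to connect to a database named after the current OS user, which is why it complains about a missing "gerrit2" database. Assuming the reviewdb database created above, naming the database explicitly should work:
gerrit2@ubuntu:~$ psql -d reviewdb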
Installed gerrit 2.11.4:
gerrit2@ubuntu:~$ java -jar gerrit-2.11.4.war init -d .
See gerrit.conf result of install:
[gerrit]
basePath = .
canonicalWebUrl = http://ubuntu:8080/
[database]
type = postgresql
hostname = localhost
database = reviewdb
username = gerrit2
[index]
type = LUCENE
[auth]
type = OPENID
[sendemail]
smtpServer = localhost
smtpUser = lmougen5
[container]
user = gerrit2
javaHome = /usr/lib/jvm/java-7-openjdk-amd64/jre
[sshd]
listenAddress = *:22
[httpd]
listenUrl = http://*:8080/
[cache]
directory = cache
gerrit init didn't bring up a browser as the instructions say it should, so I attempted a reindex. Note the errors in the output:
gerrit2@ubuntu:~$ java -jar gerrit-2.11.4.war reindex
[2015-11-19 16:42:24,861] INFO com.google.gerrit.server.git.LocalDiskRepositoryManager : Defaulting core.streamFileThreshold to 119m
[2015-11-19 16:42:25,542] INFO com.google.gerrit.server.cache.h2.H2CacheFactory : Enabling disk cache /home/gerrit2/cache
Reindexing changes: done
Reindexed 0 changes in 0.0s (0.0/s)
[2015-11-19 16:42:26,525] WARN com.google.gerrit.server.cache.h2.H2CacheImpl : Cannot build BloomFilter for jdbc:h2:file:/home/gerrit2/cache/diff_intraline: Error opening database: "Sleep interrupted" [8000-174]
[2015-11-19 16:42:26,526] INFO com.google.gerrit.server.cache.h2.H2CacheFactory : Finishing 4 disk cache updates
Then I tried a restart, though I probably need to address the errors above first, along with the gerrit2 user's access to reviewdb via psql:
gerrit2@ubuntu:~$ ~/bin/gerrit.sh restart
The restart failed; here's ~/logs/error_log:
[2015-11-20 08:12:30,627] INFO com.google.gerrit.server.cache.h2.H2CacheFactory : Enabling disk cache /home/gerrit2/cache
[2015-11-20 08:12:31,970] INFO com.google.gerrit.server.config.ScheduleConfig : gc schedule parameter "gc.interval" is not configured
[2015-11-20 08:12:33,640] WARN com.google.gerrit.httpd.GitWebConfig : gitweb not installed (no /usr/lib/cgi-bin/gitweb.cgi found)
[2015-11-20 08:12:34,415] INFO org.eclipse.jetty.util.log : Logging initialized #8505ms
[2015-11-20 08:12:35,115] INFO com.google.gerrit.server.git.LocalDiskRepositoryManager : Defaulting core.streamFileThreshold to 119m
[2015-11-20 08:12:35,204] INFO com.google.gerrit.server.plugins.PluginLoader : Loading plugins from /home/gerrit2/plugins
[2015-11-20 08:12:35,363] INFO com.google.gerrit.server.plugins.PluginLoader : Loaded plugin commit-message-length-validator, version v2.11.4
[2015-11-20 08:12:35,479] INFO com.google.gerrit.server.plugins.PluginLoader : Loaded plugin download-commands, version v2.11.4
[2015-11-20 08:12:35,603] WARN com.googlesource.gerrit.plugins.replication.ReplicationFileBasedConfig : Config file /home/gerrit2/etc/replication.configdoes not exist; not replicating
[2015-11-20 08:12:35,617] INFO com.google.gerrit.server.plugins.PluginLoader : Loaded plugin replication, version v2.11.4
[2015-11-20 08:12:35,726] INFO com.google.gerrit.server.plugins.PluginLoader : Loaded plugin reviewnotes, version v2.11.4
[2015-11-20 08:12:35,787] INFO com.google.gerrit.server.plugins.PluginLoader : Loaded plugin singleusergroup, version v2.11.4
[2015-11-20 08:12:36,290] ERROR com.google.gerrit.pgm.Daemon : Unable to start daemon
java.lang.IllegalStateException: Cannot bind to *
at com.google.gerrit.sshd.SshDaemon.start(SshDaemon.java:319)
at com.google.gerrit.lifecycle.LifecycleManager.start(LifecycleManager.java:74)
at com.google.gerrit.pgm.Daemon.start(Daemon.java:293)
at com.google.gerrit.pgm.Daemon.run(Daemon.java:205)
at com.google.gerrit.pgm.util.AbstractProgram.main(AbstractProgram.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.google.gerrit.launcher.GerritLauncher.invokeProgram(GerritLauncher.java:166)
at com.google.gerrit.launcher.GerritLauncher.mainImpl(GerritLauncher.java:93)
at com.google.gerrit.launcher.GerritLauncher.main(GerritLauncher.java:50)
at Main.main(Main.java:25)
Caused by: java.io.IOException: Error while binding on 0.0.0.0/0.0.0.0:22
original message : Permission denied
at org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:238)
at org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
at org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:582)
at org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$400(AbstractPollingIoAcceptor.java:70)
at org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:456)
at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Gerrit runs its own SSH daemon instance and thus must be given its own port, separate from any other sshd instance.
The startup failure went away once Gerrit was given a unique port, and the Gerrit web site was then accessible.
Bad:
[sshd]
listenAddress = *:22
Good:
[sshd]
listenAddress = *:29418
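After editing ~/etc/gerrit.config (the path assumes the site was initialized in the gerrit2 home directory, as above), a restart and a quick SSH probe against the new port confirm the daemon is listening, for example:
gerrit2@ubuntu:~$ ~/bin/gerrit.sh restart
gerrit2@ubuntu:~$ ssh -p 29418 <your-gerrit-username>@localhost
29418 is Gerrit's conventional SSH port; the ssh check only authenticates once an SSH key is registered for the account, but reaching a Gerrit banner or key prompt is enough to show the port is bound.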

Related

Error when running bitnami postgres container

It's my first time with Docker. I have created a Docker container from a Bitnami image with this Dockerfile:
FROM bitnami/postgresql:14.1.0
MAINTAINER <whatever>
ADD --chown=1001:1001 main /bitnami/postgresql/data
where main is a copy of my PostgreSQL data directory, /var/lib/postgresql/data.
When I try to run the container in this way:
docker run -d --name database -p 5432:5432 -e ALLOW_EMPTY_PASSWORD=yes mypostgres
I get the following error. Does anyone know what it is and how to fix it?
postgresql 15:39:28.09
postgresql 15:39:28.09 Welcome to the Bitnami postgresql container
postgresql 15:39:28.09 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 15:39:28.09 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 15:39:28.09
postgresql 15:39:28.12 INFO ==> ** Starting PostgreSQL setup **
postgresql 15:39:28.16 INFO ==> Validating settings in POSTGRESQL_* env vars..
postgresql 15:39:28.17 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
postgresql 15:39:28.18 INFO ==> Loading custom pre-init scripts...
postgresql 15:39:28.18 INFO ==> Initializing PostgreSQL database...
postgresql 15:39:28.21 INFO ==> pg_hba.conf file not detected. Generating it...
postgresql 15:39:28.22 INFO ==> Generating local authentication configuration
postgresql 15:39:28.23 INFO ==> Deploying PostgreSQL with persisted data...
postgresql 15:39:28.25 INFO ==> Configuring replication parameters
postgresql 15:39:28.30 INFO ==> Configuring fsync
postgresql 15:39:28.34 INFO ==> Loading custom scripts...
postgresql 15:39:28.34 INFO ==> Enabling remote connections
postgresql 15:39:28.37 INFO ==> ** PostgreSQL setup finished! **
postgresql 15:39:28.41 INFO ==> ** Starting PostgreSQL **
2021-12-14 15:39:28.454 GMT [1] LOG: pgaudit extension initialized
2021-12-14 15:39:28.462 GMT [1] FATAL: could not open directory "pg_notify": No such file or directory
2021-12-14 15:39:28.463 GMT [1] LOG: database system is shut down
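One possibility, offered only as a guess: empty runtime directories such as pg_notify are easily lost when a data directory is copied with a tool that skips empty folders, and the FATAL above is exactly a missing pg_notify. A sketch of a workaround is to recreate them in the Dockerfile before the server starts (the directory list below is illustrative, not exhaustive):
FROM bitnami/postgresql:14.1.0
ADD --chown=1001:1001 main /bitnami/postgresql/data
# recreate empty runtime directories that the copy may have dropped (illustrative list)
RUN mkdir -p /bitnami/postgresql/data/pg_notify \
             /bitnami/postgresql/data/pg_tblspc \
             /bitnami/postgresql/data/pg_twophase \
             /bitnami/postgresql/data/pg_replslot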

Not able to run scm_prepare_database.sh script

I am installing Cloudera Manager and CDH 6.3.x, using PostgreSQL for the Cloudera Manager Server and the other services that need databases. I'm on a RHEL 7.4 Azure VM and following the official documentation: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/prepare_cm_database.html
I have completed Step 4 (Install Databases). In Step 5 (Set up the Cloudera Manager Database), executing the scm_prepare_database.sh script gives me the following error:
[root@machine1 ~]# sudo /opt/cloudera/cm/schema/scm_prepare_database.sh postgresql scm scm
Enter SCM password:
JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing: /usr/java/jdk1.8.0_181-cloudera/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:/opt/cloudera/cm/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
Jul 17, 2020 1:59:50 PM org.postgresql.Driver connect
SEVERE: Connection error:
org.postgresql.util.PSQLException: FATAL: Ident authentication failed for user "scm"
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
at org.postgresql.Driver.makeConnection(Driver.java:450)
at org.postgresql.Driver.connect(Driver.java:252)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at com.cloudera.enterprise.dbutil.DbCommandExecutor.testDbConnection(DbCommandExecutor.java:263)
at com.cloudera.enterprise.dbutil.DbCommandExecutor.main(DbCommandExecutor.java:139)
[ main] DbCommandExecutor INFO Unable to login using supplied username/password.
[ main] DbCommandExecutor ERROR Error when connecting to database.
org.postgresql.util.PSQLException: FATAL: Ident authentication failed for user "scm"
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438)[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7]
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7]
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7]
at org.postgresql.Driver.makeConnection(Driver.java:450)[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7]
at org.postgresql.Driver.connect(Driver.java:252)[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7]
at java.sql.DriverManager.getConnection(DriverManager.java:664)[:1.8.0_181]
at java.sql.DriverManager.getConnection(DriverManager.java:247)[:1.8.0_181]
at com.cloudera.enterprise.dbutil.DbCommandExecutor.testDbConnection(DbCommandExecutor.java:263)[db-common-6.3.1.96818eaab0a222aa84a7854b8d22c0c7.jar:]
at com.cloudera.enterprise.dbutil.DbCommandExecutor.main(DbCommandExecutor.java:139)[db-common-6.3.1.96818eaab0a222aa84a7854b8d22c0c7.jar:]
[ main] DbCommandExecutor ERROR Exiting with exit code 8
--> Error 8, giving up (use --force if you wish to ignore the error)
I had logged into postgres using # sudo -u postgres psql.
PostgreSQL version - PostgreSQL 9.2.23 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
Please help. Thanks in advance.
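For what it's worth, "Ident authentication failed" usually means the host entries in pg_hba.conf are still set to ident rather than password (md5) authentication. A sketch of the usual adjustment, assuming the default RHEL data directory (the path may differ on your system):
# /var/lib/pgsql/data/pg_hba.conf - switch the host lines from ident to md5
host    all    all    127.0.0.1/32    md5
host    all    all    ::1/128         md5
# then restart PostgreSQL so the change takes effect
sudo systemctl restart postgresql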

londiste ERROR Node 'slave_IP' already exists

Yesterday I used londiste for logical replication between PostgreSQL 9.3 and 9.5.
Today I tried to use londiste again, but I had not deleted the node and schema after yesterday's run.
List package
yum list skytools*
Installed Packages
skytools-95.x86_64 3.2.6-1.rhel7
skytools-95-modules.x86_64 3.2.6-1.rhel7
Config master
cat /etc/skytools/londiste-master.ini
[londiste3]
job_name = appqueue
db = dbname=my_database host=master_IP
queue_name = appqueue
logfile = /var/log/skytools/master.log
pidfile = /var/run/skytools/master.pid
Config Slave
cat /etc/skytools/londiste-slave.ini
[londiste3]
job_name = appqueue
db = dbname=my_database
queue_name = appqueue
logfile = /var/log/skytools/slave.log
pidfile = /var/run/skytools/slave.pid
Install schema
qadmin -h master_IP -U postgres -d my_database -c "install londiste"
INSTALL
Copy DB
pg_dump -h master_IP -s -C -U postgres my_database |psql -U postgres
create-root master_IP
su postgres -c "londiste3 /etc/skytools/londiste-master.ini create-root master_IP 'dbname=my_database host=master_IP'"
2017-10-27 08:36:16,250 20249 INFO plpgsql is installed
2017-10-27 08:36:16,250 20249 INFO pgq is installed
2017-10-27 08:36:16,252 20249 INFO pgq.get_batch_cursor is installed
2017-10-27 08:36:16,252 20249 INFO pgq_ext is installed
2017-10-27 08:36:16,253 20249 INFO pgq_node is installed
2017-10-27 08:36:16,254 20249 INFO londiste is installed
2017-10-27 08:36:16,254 20249 INFO londiste.global_add_table is installed
2017-10-27 08:36:16,262 20249 INFO Node is already initialized as root
create-leaf slave_IP
su postgres -c "londiste3 /etc/skytools/londiste-slave.ini create-leaf slave_IP dbname=my_database --provider='host=master_IP dbname=my_database'"
2017-10-27 08:37:10,984 20414 WARNING No host= in public connect string, bad idea
2017-10-27 08:37:10,991 20414 INFO plpgsql is installed
2017-10-27 08:37:10,992 20414 INFO pgq is installed
2017-10-27 08:37:10,993 20414 INFO pgq.get_batch_cursor is installed
2017-10-27 08:37:10,993 20414 INFO pgq_ext is installed
2017-10-27 08:37:10,994 20414 INFO pgq_node is installed
2017-10-27 08:37:10,994 20414 INFO londiste is installed
2017-10-27 08:37:10,994 20414 INFO londiste.global_add_table is installed
2017-10-27 08:37:11,006 20414 INFO Initializing node
2017-10-27 08:37:11,022 20414 ERROR Node 'slave_IP' already exists
Run londiste3 master and slave worker
su postgres -c "londiste3 -d /etc/skytools/londiste-slave.ini worker"
Ignoring stale pidfile
su postgres -c "londiste3 -d /etc/skytools/londiste-master.ini worker"
Ignoring stale pidfile
Run pgqd
pgqd -d /etc/skytools/pgqd.ini
2017-10-27 08:38:31.638 20659 LOG Starting pgqd 3.2.6
Master status
su postgres -c "londiste3 /etc/skytools/londiste-master.ini status"
Queue: appqueue Local node: master_IP
None (None)
Tables: 0/0/0
Lag: (n/a), NOT UPTODATE
master_IP (root)
Tables: 0/0/0
Lag: 17h56m7s, Tick: 1
Try to delete the node:
su postgres -c "londiste3 /etc/skytools/londiste-slave.ini drop-node slave_IP"
2017-10-27 09:20:50,945 29464 ERROR get_node_database: cannot resolve slave_IP
su postgres -c "londiste3 /etc/skytools/londiste-slave.ini drop-node slave_IP dbname=my_database"
2017-10-27 09:21:17,859 29543 ERROR command 'drop-node' got 2 args, but expects 1: node_name
su postgres -c "londiste3 /etc/skytools/londiste-master.ini drop-node master_IP"
2017-10-27 09:24:32,038 30190 ERROR node still has subscribers
How do I correctly fix this error?
The package doesn't include the pgqadm.py application.
You must use the name you gave the node, not its IP.
For example, if you created the node with
londiste3 slave.ini create-branch slave 'SLAVE_CONN_STR' --provider='MASTER_CONN_STR'
you delete it with
londiste3 slave.ini drop-node slave
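Applied to the configs above, where the node was created with the name slave_IP, the drop would be run against that same name; note also the earlier "No host= in public connect string" warning, since the node location has to be resolvable for drop-node to work. A sketch:
su postgres -c "londiste3 /etc/skytools/londiste-slave.ini drop-node slave_IP"
su postgres -c "londiste3 /etc/skytools/londiste-master.ini drop-node master_IP"
Dropping the leaf first is what should clear the "node still has subscribers" error on the root.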

sqoop error: password authentication failed

I am trying to import data from PostgreSQL 9.3 into Hadoop 2.7.2 using Sqoop 1.4.6 on Linux.
When I use the following command, it returns the correct result:
sqoop-list-databases --connect jdbc:postgresql://localhost:5432/ --username postgres --password "baixinghehe"
The following result shows the username and password are both OK.
Warning: /usr/local/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/local/sqoop/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
16/07/09 15:39:50 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
16/07/09 15:39:50 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/07/09 15:39:50 INFO manager.SqlManager: Using default fetchSize of 1000
template1
template0
postgres
learnflask
But when I try to use sqoop import with the following command:
sqoop import --connect jdbc:postgresql://localhost:5432/learnflask --username postgres --password baixinghehe --table employee --target-dir /data/employee -m 1
It runs OK at first, until the map phase. Here is the error. It seems strange to me, because the program should never reach the MapReduce part if the password were wrong. Can someone help? Thanks very much.
Warning: /usr/local/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/local/sqoop/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
16/07/09 15:35:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
16/07/09 15:35:02 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/07/09 15:35:02 INFO manager.SqlManager: Using default fetchSize of 1000
16/07/09 15:35:02 INFO tool.CodeGenTool: Beginning code generation
16/07/09 15:35:03 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM "employee" AS t LIMIT 1
16/07/09 15:35:03 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoop
Note: /tmp/sqoop-chenxuyuan/compile/b54e25a7f52c8eb496b2b84945ecd05a/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/07/09 15:35:06 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-chenxuyuan/compile/b54e25a7f52c8eb496b2b84945ecd05a/employee.jar
16/07/09 15:35:06 WARN manager.PostgresqlManager: It looks like you are importing from postgresql.
16/07/09 15:35:06 WARN manager.PostgresqlManager: This transfer can be faster! Use the --direct
16/07/09 15:35:06 WARN manager.PostgresqlManager: option to exercise a postgresql-specific fast path.
16/07/09 15:35:06 INFO mapreduce.ImportJobBase: Beginning import of employee
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/07/09 15:35:06 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
16/07/09 15:35:07 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
16/07/09 15:35:07 INFO client.RMProxy: Connecting to ResourceManager at cxy10/192.168.0.110:8032
16/07/09 15:35:33 INFO db.DBInputFormat: Using read commited transaction isolation
16/07/09 15:35:33 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN("id"), MAX("id") FROM "employee"
16/07/09 15:35:33 INFO mapreduce.JobSubmitter: number of splits:2
16/07/09 15:35:34 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1468047672859_0003
16/07/09 15:35:35 INFO impl.YarnClientImpl: Submitted application application_1468047672859_0003
16/07/09 15:35:35 INFO mapreduce.Job: The url to track the job: http://cxy10:8088/proxy/application_1468047672859_0003/
16/07/09 15:35:35 INFO mapreduce.Job: Running job: job_1468047672859_0003
16/07/09 15:35:46 INFO mapreduce.Job: Job job_1468047672859_0003 running in uber mode : false
16/07/09 15:35:46 INFO mapreduce.Job: map 0% reduce 0%
16/07/09 15:35:55 INFO mapreduce.Job: Task Id : attempt_1468047672859_0003_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "postgres"
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:167)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:749)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "postgres"
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:220)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:165)
... 9 more
Caused by: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "postgres"
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:415)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:188)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:143)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:29)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:21)
at org.postgresql.jdbc3g.Jdbc3gConnection.<init>(Jdbc3gConnection.java:24)
at org.postgresql.Driver.makeConnection(Driver.java:412)
at org.postgresql.Driver.connect(Driver.java:280)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:213)
... 10 more
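A hedged observation rather than a definitive answer: list-databases runs entirely on the local machine, while the import's map tasks run on the cluster nodes and receive the connect string and credentials through the job configuration, so anything that only holds on the client (a localhost address, a local-only pg_hba.conf rule, a password that gets mangled on the way to the tasks) surfaces exactly at the map phase. A sketch of the usual adjustments, using --password-file (available since Sqoop 1.4.4) and an address the task nodes can reach; the host and paths below are placeholders:
echo -n "baixinghehe" > /tmp/pg.password
hdfs dfs -put /tmp/pg.password /user/<your-user>/pg.password
sqoop import --connect jdbc:postgresql://<db-host>:5432/learnflask \
  --username postgres --password-file /user/<your-user>/pg.password \
  --table employee --target-dir /data/employee -m 1
The echo -n matters: a trailing newline in the password file is a classic cause of "password authentication failed".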

Docker REST API is not binding on port for Jenkins

I am using:
docker version 1.11.1, build 5604cbe. I have made the following entries in /etc/default/docker to configure the Docker REST API for the jenkins user group:
# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
DOCKER_OPTS="G- jenkins -H unix://var/run/docker.sock -H tcp://0.0.0.0:9090"
export DOCKER_HOST="tcp://0.0.0.0:9090"
PS: I have also tried with 127.0.0.1.
Then I did sudo service docker restart
The command $ ps aux|grep docker returned:
root 12385 0.0 0.2 421840 36016 ? Ssl 19:21 0:00 /usr/bin/docker daemon -H fd://
root 12391 0.0 0.0 294652 12188 ? Ssl 19:21 0:00 docker-containerd -l /var/run/docker/libcontainerd/docker-containerd.sock --runtime docker-runc
root 12654 0.0 0.0 21296 1028 pts/1 S+ 19:28 0:00 grep --color=auto docker
It seems like the REST API is not getting bound to port 9090.
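A quick way to check whether anything is listening on that port (a sketch; use whichever of ss/netstat is installed, and /version is the Docker Remote API's version endpoint):
sudo ss -tlnp | grep 9090
curl http://127.0.0.1:9090/version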
I am then using the Jenkins Docker build step plugin to connect to the Docker REST API. It returns the following:
Building in workspace /var/lib/jenkins/jobs/Telco_automated_build/workspace
[Docker] INFO: Pulling image registry.hub.docker.com/pratyush/product:latest
ERROR: Build step failed with exception
javax.ws.rs.ProcessingException: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:9090 [/127.0.0.1] failed: Connection refused
at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:513)
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:246)
at org.glassfish.jersey.client.JerseyInvocation$1.call(JerseyInvocation.java:667)
at org.glassfish.jersey.client.JerseyInvocation$1.call(JerseyInvocation.java:664)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:424)
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:664)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:424)
at org.glassfish.jersey.client.JerseyInvocation$Builder.post(JerseyInvocation.java:333)
at com.github.dockerjava.jaxrs.PullImageCmdExec.execute(PullImageCmdExec.java:37)
at com.github.dockerjava.jaxrs.PullImageCmdExec.execute(PullImageCmdExec.java:17)
at com.github.dockerjava.jaxrs.AbstrDockerCmdExec.exec(AbstrDockerCmdExec.java:57)
at com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:29)
at com.github.dockerjava.core.command.PullImageCmdImpl.exec(PullImageCmdImpl.java:15)
at org.jenkinsci.plugins.dockerbuildstep.cmd.PullImageCommand.execute(PullImageCommand.java:75)
at org.jenkinsci.plugins.dockerbuildstep.DockerBuilder.perform(DockerBuilder.java:75)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
In the Jenkins global settings, when I hit Test Connection it returns:
Something went wrong, cannot connect to http://127.0.0.1:9090/, cause: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:9090 [/127.0.0.1] failed: Connection refused
PS: I have restarted the Jenkins server after changing the global settings.
Any help? Where am I going wrong?
Ubuntu 16.04 uses systemd now, I believe, in which case the docker daemon arguments are not set via /etc/default/docker. You can see they're not being picked up in the output of your $ ps aux|grep docker.
Instead you need to follow the instructions for setting daemon arguments in systemd-based setups.
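For example, a sketch using a systemd drop-in; the ExecStart line mirrors the daemon invocation visible in the ps output above, with the group and TCP socket added (flags and paths are assumptions to adapt):
# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -G jenkins -H fd:// -H tcp://0.0.0.0:9090
Then reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart docker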