Connecting Azure Databricks to MongoDB

I am trying to connect Azure Databricks to MongoDB, but I am getting the following error, which I am not able to resolve:
com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=sopt-bo-halo-01.northeurope.cloudapp.azure.com:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}, caused by {sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}, caused by {sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}}, {address=sopt-bo-halo-03.northeurope.cloudapp.azure.com:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}, caused by {sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}, caused by {sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}}, {address=sopt-bo-halo-02.northeurope.cloudapp.azure.com:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}, caused by {sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}, caused by {sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}}]
from pyspark.sql import SparkSession
database = "prod-backoffice"
collection = "AmazonRegion"
connectionString = ("mongodb://<username>:<password>@<host1>:27017,<host2>:27017,<host3>:27017/defaultdb"
                    "?ssl=true&readPreference=primary&maxIdleTimeMS=60000&connectTimeoutMS=10000"
                    "&authSource=<DBNAME>&authMechanism=SCRAM-SHA-1"
                    "&tlsAllowInvalidCertificates=true&tlsAllowInvalidHostnames=true"
                    "&sslAllowConnectionsWithoutCertificates=true&sslInvalidHostNameAllowed=true")
spark = SparkSession.builder \
    .appName("MongoSparkConnectorIntro") \
    .config('spark.mongodb.input.uri', connectionString) \
    .config('spark.mongodb.output.uri', connectionString) \
    .config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_2.12:3.0.1') \
    .getOrCreate()
df = spark.read \
    .format("com.mongodb.spark.sql.DefaultSource") \
    .option("uri", connectionString) \
    .option("database", database) \
    .option("collection", collection) \
    .load()

Connect MongoDB Atlas with Databricks
Connection with Databricks
Allow the Databricks cluster to reach Atlas by adding the external IP addresses of the Databricks cluster nodes to the IP access list (whitelist) in Atlas.
In the MongoDB Atlas UI, open Network Access and add the Databricks cluster IP addresses.
Configure Databricks Cluster with MongoDB Connection URI
Get the MongoDB connection URI: in the MongoDB Atlas UI, click the cluster you have created.
Click the Connect button.
Click Connect Your Application.
Make sure your Scala and Spark versions match the Databricks MongoDB connector configuration you are using.
Copy the generated connection string. It should look like mongodb+srv://<user>:<password>@Firstdatabase-wlcof.azure.mongodb.net/test?retryWrites=true
Replace the password and the database name (Firstdatabase) with your own password and database name.
Configuration in Databricks
METHOD 1
In your Databricks cluster, select the Configuration tab and click the Edit button. Under Advanced Options, select the Spark tab and add the connection string to the Spark config, using the following format:
spark.mongodb.output.uri <connection-string>
spark.mongodb.input.uri <connection-string>
METHOD 2
In a Python notebook, configure the settings directly using the code below.
from pyspark.sql import SparkSession

database = "<database_name>"    # your database name
collection = "millionsongs"     # your collection name
# Paste your connection string here
connectionString = 'mongodb+srv://user:<password>@cluster0.9rvsi.mongodb.net/<database>?retryWrites=true&w=majority'

spark = SparkSession \
    .builder \
    .config('spark.mongodb.input.uri', connectionString) \
    .config('spark.mongodb.output.uri', connectionString) \
    .config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_2.12:3.0.1') \
    .getOrCreate()

# Reading from MongoDB
df = spark.read \
    .format("com.mongodb.spark.sql.DefaultSource") \
    .option("uri", connectionString) \
    .option("database", database) \
    .option("collection", collection) \
    .load()

Related

Unable to connect from Dataflow job to Schema Registry when Schema Registry requires TLS client authentication

I am developing a GCP Cloud Dataflow job that uses a Kafka broker and Schema Registry.
Our Kafka broker and Schema Registry require a TLS client certificate.
I am facing a connection issue with the Schema Registry on deployment.
Any suggestion is highly welcome.
Here is what I do in the Dataflow job.
First, I create the consumer properties with the TLS configuration:
props.put("security.protocol", "SSL");
props.put("ssl.truststore.password", "aaa");
props.put("ssl.keystore.password", "bbb");
props.put("ssl.key.password", "ccc"));
props.put("schema.registry.url", "https://host:port")
props.put("specific.avro.reader", true);
Then I update the consumer properties with updateConsumerProperties:
Pipeline p = Pipeline.create(options)
...
.updateConsumerProperties(properties)
...
As this Stack Overflow answer suggests, I also download the keyStore and trustStore to a local directory and specify the trustStore/keyStore locations in the consumer properties in the ConsumerFactory:
Truststore and Google Cloud Dataflow
Pipeline p = Pipeline.create(options)
...
.withConsumerFactoryFn(new MyConsumerFactory(...))
...
In ConsumerFactory:
public Consumer<byte[], byte[]> apply(Map<String, Object> config) {
    // download keyStore and trustStore from GCS bucket
    config.put("ssl.truststore.location", (Object) localTrustStoreFilePath);
    config.put("ssl.keystore.location", (Object) localKeyStoreFilePath);
    return new KafkaConsumer<byte[], byte[]>(config);
}
With this code the deployment succeeds, but the Dataflow job fails with a TLS server certificate verification error:
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
sun.security.validator.Validator.validate(Validator.java:260)
sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1513)
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:208)
io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:252)
io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:482)
io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:475)
io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:151)
io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndId(CachedSchemaRegistryClient.java:230)
io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getById(CachedSchemaRegistryClient.java:209)
io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:116)
io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:88)
org.fastretailing.rfid.store.siv.EPCTransactionKafkaAvroDeserializer.deserialize(EPCTransactionKafkaAvroDeserializer.scala:14)
org.fastretailing.rfid.store.siv.EPCTransactionKafkaAvroDeserializer.deserialize(EPCTransactionKafkaAvroDeserializer.scala:7)
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.advance(KafkaUnboundedReader.java:234)
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.start(KafkaUnboundedReader.java:176)
org.apache.beam.runners.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:779)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:76)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1228)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:143)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:967)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Then I found that the Schema Registry client loads its TLS configuration from system properties.
https://github.com/confluentinc/schema-registry/issues/943
I tested a Kafka consumer with the same configuration and confirmed that it works fine:
props.put("schema.registry.url", "https://host:port")
props.put("specific.avro.reader", true);
props.put("ssl.truststore.location", System.getProperty("javax.net.ssl.trustStore"));
props.put("ssl.truststore.password", System.getProperty("javax.net.ssl.keyStore"));
props.put("ssl.keystore.location", System.getProperty("javax.net.ssl.keyStore"));
props.put("ssl.keystore.password", System.getProperty("javax.net.ssl.keyStorePassword"));
props.put("ssl.key.password", System.getProperty("javax.net.ssl.key.password"));
Next I applied the same approach to the Dataflow job code, i.e. I set the same TLS configuration both as system properties and as consumer properties.
I specified the passwords as system properties when executing the application:
-Djavax.net.ssl.keyStorePassword=aaa \
-Djavax.net.ssl.key.password=bbb \
-Djavax.net.ssl.trustStorePassword=ccc \
Note: I set the trustStore and keyStore location system properties in the ConsumerFactory, since those files are downloaded to a local temp directory:
config.put("ssl.truststore.location", (Object)localTrustStoreFilePath)
config.put("ssl.keystore.location", (Object)localKeyStoreFilePath)
System.setProperty("javax.net.ssl.trustStore", localTrustStoreFilePath)
System.setProperty("javax.net.ssl.keyStore", localKeyStoreFilePath)
But this time even the deployment failed, with the following error:
Exception in thread "main" java.lang.RuntimeException: Failed to construct instance from factory method DataflowRunner#fromOptions(interface org.apache.beam.sdk.options.PipelineOptions)
at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:224)
...
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
Caused by: java.lang.IllegalArgumentException: DataflowRunner requires gcpTempLocation, but failed to retrieve a value from PipelineOptions
at org.apache.beam.runners.dataflow.DataflowRunner.fromOptions(DataflowRunner.java:246)
Caused by: java.lang.IllegalArgumentException: Error constructing default value for gcpTempLocation: tempLocation is not a valid GCS path, gs://dev-k8s-rfid-store-dataflow/rfid-store-siv-epc-transactions-to-bq/tmp.
at org.apache.beam.sdk.extensions.gcp.options.GcpOptions$GcpTempLocationFactory.create(GcpOptions.java:255)
...
Caused by: java.lang.RuntimeException: Unable to verify that GCS bucket gs://dev-k8s-rfid-store-dataflow exists.
at org.apache.beam.sdk.extensions.gcp.storage.GcsPathValidator.verifyPathIsAccessible(GcsPathValidator.java:86)
...
Caused by: java.io.IOException: Error getting access token for service account: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:401)
...
Caused by: java.net.SocketException: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)
at javax.net.ssl.DefaultSSLSocketFactory.throwException(SSLSocketFactory.java:248)
...
Caused by: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)
at java.security.Provider$Service.newInstance(Provider.java:1617)
...
Caused by: java.io.IOException: Keystore was tampered with, or password was incorrect
at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:780)
Caused by: java.security.UnrecoverableKeyException: Password verification failed
at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:778)
Am I missing something?
In the ConsumerFactoryFn, you need to copy the certificate from some location (such as GCS) to a local file path on the machine.
In Truststore and Google Cloud Dataflow, the ConsumerFactoryFn that the user writes has this snippet of code, which fetches the truststore from GCS:
Storage storage = StorageOptions.newBuilder()
    .setProjectId("prj-id-of-your-bucket")
    .setCredentials(GoogleCredentials.getApplicationDefault())
    .build()
    .getService();
Blob blob = storage.get("your-bucket-name", "pth.to.your.kafka.client.truststore.jks");
ReadChannel readChannel = blob.reader();
FileOutputStream fileOutputStream = new FileOutputStream("/tmp/kafka.client.truststore.jks"); // path where the jks file will be stored
fileOutputStream.getChannel().transferFrom(readChannel, 0, Long.MAX_VALUE);
fileOutputStream.close();
File f = new File("/tmp/kafka.client.truststore.jks"); // assuring the store file exists
if (f.exists()) {
    LOG.debug("key exists");
} else {
    LOG.error("key does not exist");
}
You'll need to do something similar (it doesn't have to be GCS but it does need to be accessible from all VMs executing your pipeline on Google Cloud Dataflow).
I got a reply from GCP support. It seems that Apache Beam does not support Schema Registry:
Hello,
the Dataflow specialist has reached me back. I will now expose what they have told me.
The answer to your question is no, Apache Beam does not support Schema Registry.
However, they have told me that you could implement the calls to Schema Registry
by yourself as Beam only consumes raw messages and it is user's responsibility to do
whatever they need with the data.
This is based on our understanding of the case that you want to publish messages to Kafka,
and have DF consume those messages, parsing them based on the schema from the registry.
I hope this information can be useful to you, let me know if I can be of further help.
But the Dataflow job can still receive Avro-format binary messages, so you can call the Schema Registry REST API yourself, as described in the following answer:
https://stackoverflow.com/a/55917157
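As a rough illustration of that approach (a minimal sketch, not the linked answer's exact code), the following fetches a schema by ID from the Schema Registry REST endpoint over mutual TLS, reusing the truststore and keystore files already downloaded to the worker's local filesystem. The class name, file paths, and passwords are placeholders.

import java.io.FileInputStream;
import java.io.InputStream;
import java.net.URL;
import java.security.KeyStore;
import java.util.Scanner;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SchemaRegistryTlsClient {

    // Fetches the Avro schema JSON for a given schema ID, e.g. GET /schemas/ids/42
    public static String fetchSchemaById(String baseUrl, int schemaId,
                                         String trustStorePath, char[] trustStorePassword,
                                         String keyStorePath, char[] keyStorePassword) throws Exception {
        // Load the truststore (server CA) and keystore (client certificate) from local files.
        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (InputStream in = new FileInputStream(trustStorePath)) {
            trustStore.load(in, trustStorePassword);
        }
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (InputStream in = new FileInputStream(keyStorePath)) {
            keyStore.load(in, keyStorePassword);
        }

        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keyStorePassword);

        // Build an SSLContext that presents the client certificate and trusts the server's CA.
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        URL url = new URL(baseUrl + "/schemas/ids/" + schemaId);
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setSSLSocketFactory(sslContext.getSocketFactory());
        try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
            return scanner.hasNext() ? scanner.next() : "";
        }
    }
}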

How to connect to an XMPP server using TLS/SSL?

I am using Smack 4.2.1 to connect to an XMPP server, but when I run the code the server responds with the message below.
I know the error is caused by the TLS/SSL configuration, but I don't know how to solve it.
XMPPTCPConnectionConfiguration conf = XMPPTCPConnectionConfiguration.builder()
.setXmppDomain("404.city").setUsernameAndPassword("xx", "xxxx")
.setCompressionEnabled(false)
.setSecurityMode(ConnectionConfiguration.SecurityMode.required)
.build();
XMPPTCPConnection connection = new XMPPTCPConnection(conf);
connection.connect();
org.jivesoftware.smack.SmackException: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.parsePackets(XMPPTCPConnection.java:1060)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.access$300(XMPPTCPConnection.java:982)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader$1.run(XMPPTCPConnection.java:998)
at java.lang.Thread.run(Thread.java:745)
I fixed it myself:
I needed to set an SSLContext on the configuration.
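For reference, a minimal sketch of what setting an SSLContext on the configuration can look like with Smack 4.2, assuming the server's certificate (or its issuing CA) has been imported into a local JKS truststore; the truststore path and password below are placeholders.

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;

public class XmppTlsExample {
    public static void main(String[] args) throws Exception {
        // Build an SSLContext that trusts the server's certificate.
        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("/path/to/truststore.jks")) {  // placeholder path
            trustStore.load(in, "changeit".toCharArray());                           // placeholder password
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, tmf.getTrustManagers(), null);

        // Pass the custom SSLContext to the connection configuration.
        XMPPTCPConnectionConfiguration conf = XMPPTCPConnectionConfiguration.builder()
                .setXmppDomain("404.city")
                .setUsernameAndPassword("xx", "xxxx")
                .setSecurityMode(ConnectionConfiguration.SecurityMode.required)
                .setCustomSSLContext(sslContext)
                .build();

        XMPPTCPConnection connection = new XMPPTCPConnection(conf);
        connection.connect().login();
    }
}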

Remote Invocation of EJB in WildFly 10 using JNDI lookup

I'm trying to invoke an EJB on a remote server using a JNDI lookup. I'm using EJB3 with Spring MVC on WildFly 10, and the configuration described in this documentation has been done on my client and on the remote server:
https://docs.jboss.org/author/display/WFLY10/EJB+invocations+from+a+remote+client+using+JNDI
But I'm still not able to get a connection to the remote server.
1) I created a user under ApplicationRealm and gave it the permissions for a master/slave setup for remote EJB invocation.
2) This is my jboss-ejb-client.properties file; here I have given the WildFly username and password of the host server.
endpoint.name=client-endpoint
remote.connections=one, two
remote.connection.one.host=172.16.25.26
remote.connection.one.port=8080
remote.connection.one.username=ABCD
remote.connection.one.password=ABCD#123
remote.connection.two.host=localhost
remote.connection.two.port=8080
remote.connection.two.username=guest
remote.connection.two.password=guest
# org.jboss.as.logging.per-deployment=true
My exception is
javax.naming.AuthenticationException: Failed to connect to any server. Servers tried:
[http-remoting://172.16.25.26:8080 (Authentication failed: all available authentication mechanisms failed:
JBOSS-LOCAL-USER: javax.security.sasl.SaslException: Failed to read server challenge [Caused by
java.io.FileNotFoundException: D:\wildfly-10.0.0.Final\standalone\tmp\auth\local3540175271681581878.challenge
(The system cannot find the file specified)]
DIGEST-MD5: javax.security.sasl.SaslException: DIGEST-MD5: Cannot perform callback to acquire realm,
authentication ID or password [Caused by javax.security.auth.callback.UnsupportedCallbackException])]
[Root exception is javax.security.sasl.SaslException: Authentication failed: all available authentication
mechanisms failed:
Please tell me what I am missing here that causes this exception. Also, what is the significance of the secret key generated while creating the user in WildFly, and where should that key be configured?

How to configure the PostgreSQL database for deploying Alfresco on Tomcat 8?

I have built Alfresco (version 5.2) from source on Ubuntu 16.04 and want to deploy it on Tomcat 8. The deployment is successful; however, the PostgreSQL database is not getting configured as required. I have followed the steps given in http://docs.alfresco.com/5.1/tasks/postgresql-config.html
I observe the home page as shown in the attached image (alfresco_page).
Am I missing something here that prevents the PostgreSQL database from being configured? Is there any other configuration that needs to be done that I have missed?
UPDATE
The alfresco.log gave me this
2017-08-01 05:53:54,406 WARN [org.alfresco.web.scripts.servlet.X509ServletFilterBase] [localhost-startStop-1] clientAuth does not appear to be set for Tomcat. clientAuth must be set to 'want' for X509 Authentication
2017-08-01 05:53:54,416 WARN [org.alfresco.web.scripts.servlet.X509ServletFilterBase] [localhost-startStop-1] Attempting to set clientAuth=want through JMX...
2017-08-01 05:53:54,427 WARN [org.alfresco.web.scripts.servlet.X509ServletFilterBase] [localhost-startStop-1] Unable to set clientAuth=want through JMX.
2017-08-01 05:53:55,139 ERROR [org.apache.solr.core.CoreContainer] [coreLoadExecutor-5-thread-1] Error creating core [collection1]: Could not load conf for core collection1: Error loading solr config from solr/collection1/conf/solrconfig.xml
org.apache.solr.common.SolrException: Could not load conf for core collection1: Error loading solr config from solr/collection1/conf/solrconfig.xml
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:66)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:255)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:249)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Error loading solr config from solr/collection1/conf/solrconfig.xml
at org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:154)
at org.apache.solr.core.ConfigSetService.createSolrConfig(ConfigSetService.java:80)
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:61)
... 7 more
Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or '/root/tomcat85/output/build/webapps/solr/collection1/conf'
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:362)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:308)
at org.apache.solr.core.Config.<init>(Config.java:117)
at org.apache.solr.core.Config.<init>(Config.java:87)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:167)
at org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:145)
... 9 more
2017-08-01 05:54:09,634 WARN [org.hibernate.cfg.SettingsFactory] [localhost-startStop-1] Could not obtain connection metadata
org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:83)
at org.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:84)
at org.hibernate.cfg.Configuration.buildSettings(Configuration.java:2079)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1304)
at org.springframework.orm.hibernate3.LocalSessionFactoryBean.newSessionFactory(LocalSessionFactoryBean.java:863)
at org.springframework.orm.hibernate3.LocalSessionFactoryBean.buildSessionFactory(LocalSessionFactoryBean.java:782)
at org.springframework.orm.hibernate3.AbstractSessionFactoryBean.afterPropertiesSet(AbstractSessionFactoryBean.java:188)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1573)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1511)
Things to check:
Is postgres running (ps -ef | grep postgres)?
Can you use psql to connect to postgres using the db.name, db.username, and db.password that are configured in alfresco-global.properties? (An equivalent check from the JVM is sketched after this list.)
Did you follow the step in the docs about editing pg_hba.conf to make sure that postgres is configured to allow password-based authentication?
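If you also want to verify database connectivity from the JVM itself (the same path Alfresco uses), a quick check is a plain JDBC connection. This is only a sketch: the URL, user, and password are placeholders for whatever is set in your alfresco-global.properties, and the PostgreSQL JDBC driver must be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;

public class PgConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute the db.name, db.username and db.password
        // values from alfresco-global.properties.
        String url = "jdbc:postgresql://localhost:5432/alfresco";
        try (Connection conn = DriverManager.getConnection(url, "alfresco", "alfresco")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}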
Also, it is exceedingly rare to need to build Alfresco from source unless you are making changes to the low-level classes themselves, which is not recommended.

Connecting to secured remote AEM repository

In order to create JCR nodes, we are trying to programmatically connect to a remote AEM instance using the JcrUtils.getRepository(...) method to acquire a handle to the repository instance.
This instance is secured and checks for a cookie in the request to let the user in.
Is there a way to pass the cookie to JcrUtils (or to other methods of connecting to an AEM repository)?
Right now, running JcrUtils.getRepository("http://host:port/crx/server") just throws the following exception:
javax.jcr.RepositoryException: Unable to access a repository with the following settings:
org.apache.jackrabbit.repository.uri: https://<host>:<port>/crx/server
The following RepositoryFactory classes were consulted:
org.apache.jackrabbit.jcr2dav.Jcr2davRepositoryFactory: declined
org.apache.jackrabbit.jcr2spi.Jcr2spiRepositoryFactory: declined
org.apache.jackrabbit.commons.JndiRepositoryFactory: declined
org.apache.jackrabbit.core.RepositoryFactoryImpl: declined
org.apache.jackrabbit.rmi.repository.RmiRepositoryFactory: failed
because of RemoteRuntimeException: java.rmi.RemoteException: Failed to read the resource at URL https://<host>:<port>/crx/server; nested exception is:
java.io.StreamCorruptedException: invalid stream header: 3C21444F
because of RemoteException: Failed to read the resource at URL https://<host>:<port>/crx/server; nested exception is:
java.io.StreamCorruptedException: invalid stream header: 3C21444F
because of StreamCorruptedException: invalid stream header: 3C21444F
Perhaps the repository you are trying to access is not available at the moment.
at org.apache.jackrabbit.commons.JcrUtils.getRepository(JcrUtils.java:223)
at org.apache.jackrabbit.commons.JcrUtils.getRepository(JcrUtils.java:263)
...
There is no way to pass a cookie via JcrUtils.getRepository(...); it only accepts a URI as a string.
From your logs it looks like org.apache.jackrabbit.jcr2dav.Jcr2davRepositoryFactory doesn't exist in the application classpath.
Make sure you have added the following libraries to your dependencies (a minimal usage sketch follows the dependency block):
jackrabbit-jcr-commons
jackrabbit-jcr2dav
In case you are using Maven:
<dependency>
    <groupId>org.apache.jackrabbit</groupId>
    <artifactId>jackrabbit-jcr-commons</artifactId>
    <version>2.10.1</version>
</dependency>
<dependency>
    <groupId>org.apache.jackrabbit</groupId>
    <artifactId>jackrabbit-jcr2dav</artifactId>
    <version>2.10.1</version>
</dependency>
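With those two artifacts on the classpath, a minimal usage sketch looks like the following (the URL and credentials are placeholders, and this authenticates with a username and password rather than a cookie).

import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import org.apache.jackrabbit.commons.JcrUtils;

public class RemoteRepoExample {
    public static void main(String[] args) throws Exception {
        // Connect to the remote repository over the DavEx servlet.
        Repository repository = JcrUtils.getRepository("http://host:port/crx/server");  // placeholder URL
        // Log in with simple credentials (placeholders).
        Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            System.out.println("Logged in as: " + session.getUserID());
        } finally {
            session.logout();
        }
    }
}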
I am getting a similar issue as #user3239244, but only in the case of HTTPS.
I am trying to access the repository from a standalone Java application:
repository = JcrUtils.getRepository(url)
It works for HTTP and fails for HTTPS.
From the logs:
javax.jcr.RepositoryException: Unable to access a repository with the following settings:
    org.apache.jackrabbit.repository.uri: https://localhost:5433/crx/server
The following RepositoryFactory classes were consulted:
    org.apache.jackrabbit.jcr2dav.Jcr2davRepositoryFactory: declined
    org.apache.jackrabbit.jcr2spi.Jcr2spiRepositoryFactory: declined
    org.apache.jackrabbit.commons.JndiRepositoryFactory: declined
    org.apache.jackrabbit.core.RepositoryFactoryImpl: declined
    org.apache.jackrabbit.rmi.repository.RmiRepositoryFactory: failed
        because of RepositoryException: Failed to read the resource at URL https://localhost:5433/crx/server
        because of SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        because of ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        because of SunCertPathBuilderException: unable to find valid certification path to requested target