Can spring data couchbase be used to access reduced views - spring-data

I know there is a way to access reduced view results using the Couchbase Java SDK. What I am currently unable to do is use spring-data to access the reduced view. Is this possible?
The view:
View byIdGetDateStats = DefaultView.create("byIdGetDateStats",
    "function (doc, meta) {"
    + "  if (doc.docType == \"com.bd.idm.model.DayLog\") {"
    + "    emit(doc.userid, doc.date);"
    + "  }"
    + "}",
    "_stats"
);
When I attempt to use spring-data to access the view like this:
Query query = new Query();
query.setKey(ComplexKey.of(userid)).setReduce(true).setGroup(true);
ViewResult result = repo.findByIdGetDateStats(query);
Error Message
java.lang.IllegalStateException: Failed to execute CommandLineRunner
...
Caused by: java.lang.RuntimeException: Failed to access the view
...
Caused by: java.util.concurrent.ExecutionException: OperationException: SERVER: query_parse_error Reason: Invalid URL parameter 'group' or 'group_level' for non-reduce view.
....
Caused by: com.couchbase.client.protocol.views.ViewException: query_parse_error Reason: Invalid URL parameter 'group' or 'group_level' for non-reduce view.
....

No, Spring-Data-Couchbase 1.x will always set reduce to false.
If you want to query a reduced view and get the ViewResult, use the CouchbaseTemplate#queryView() method.
You can still do so in a repository by defining a custom repository method (see this chapter of the documentation; you should be able to call getCouchbaseOperations() in your implementation).
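As a rough sketch of that approach, assuming the 1.x CouchbaseOperations#queryView(design, view, query) signature (which returns the SDK's ViewResponse) and an illustrative design document name:

public interface DayLogRepositoryCustom {
    ViewResponse findByIdGetDateStats(Query query);
}

public class DayLogRepositoryImpl implements DayLogRepositoryCustom {

    @Autowired
    private CouchbaseTemplate template;

    @Override
    public ViewResponse findByIdGetDateStats(Query query) {
        // queryView passes the query through as-is, so setReduce(true)
        // and setGroup(true) are honored. The "dayLog" design document
        // name is an assumption for the example.
        return template.queryView("dayLog", "byIdGetDateStats", query);
    }
}

Extending your main repository interface with DayLogRepositoryCustom then lets Spring Data pick up the Impl class automatically.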

Related

Springboot application.properties for postgres not working - FATAL: database "null" does not exist

The problem I am experiencing is that the auto-configuration from Spring Boot version 1.5.9.RELEASE does not seem to take effect.
Note: I will probably be able to overcome my current problem if I define a bean that returns a DataSource myself.
That DataSource bean could probably look very similar to the example in the following snippet:
https://www.atomikos.com/Documentation/ConfiguringPostgreSQL
I am switching my application.properties configuration away from the default H2 database used by Spring Boot and over to Postgres.
The configuration snippet illustrated in the following page:
https://dzone.com/articles/configuring-spring-boot-for-postgresql
is definitely not working.
Here is what my configuration looks like:
spring.datasource.url=jdbc:postgresql://localhost:5432/dummydb
spring.datasource.username=DUMMYDB
spring.datasource.password=DUMMYDB
spring.jpa.hibernate.ddl-auto=create-drop
Why does this not work? Apparently for a very simple reason: while the user name and password are indeed used and handed over when the connection factory tries to build a connection to the database, the URL attribute is of no use at all when it comes to configuring the data source.
Let me start by giving you the final stack trace:
Caused by: org.postgresql.util.PSQLException: FATAL: database "null" does not exist
at org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:469) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:112) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:125) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:30) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.Driver.makeConnection(Driver.java:393) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.Driver.connect(Driver.java:267) ~[postgresql-9.1-901.jdbc4.jar:na]
at java.sql.DriverManager.getConnection(DriverManager.java:664) ~[na:1.8.0_112]
at java.sql.DriverManager.getConnection(DriverManager.java:247) ~[na:1.8.0_112]
at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:91) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.xa.PGXADataSource.getXAConnection(PGXADataSource.java:47) ~[postgresql-9.1-901.jdbc4.jar:na]
at org.postgresql.xa.PGXADataSource.getXAConnection(PGXADataSource.java:32) ~[postgresql-9.1-901.jdbc4.jar:na]
at com.atomikos.jdbc.AtomikosXAConnectionFactory.createPooledConnection(AtomikosXAConnectionFactory.java:60) ~[transactions-jdbc-3.9.3.jar:na]
... 44 common frames omitted
The reason this does not work is that the Postgres base data source does not care at all about being configured via a URL.
Rather, it wants to be configured via setters for the individual elements: databaseName, portNumber, and so on.
Please have a look at the following class:
org.postgresql.ds.common.BaseDataSource
This class provides multiple setters: for the user name, for the password, etc.
But there is no setter for a URL. The individual elements that compose the URL, such as the database name, need to go into the dedicated fields.
What the class offers instead is this getUrl() method, which returns the strange URL where the database is null.
Here is a code snippet for the method.
/**
 * Generates a DriverManager URL from the other properties supplied.
 */
private String getUrl()
{
    StringBuffer sb = new StringBuffer(100);
    sb.append("jdbc:postgresql://");
    sb.append(serverName);
    if (portNumber != 0) {
        sb.append(":").append(portNumber);
    }
    sb.append("/").append(databaseName);
    sb.append("?loginTimeout=").append(loginTimeout);
    sb.append("&socketTimeout=").append(socketTimeout);
    sb.append("&prepareThreshold=").append(prepareThreshold);
    sb.append("&unknownLength=").append(unknownLength);
    sb.append("&loglevel=").append(logLevel);
    if (protocolVersion != 0) {
        sb.append("&protocolVersion=").append(protocolVersion);
    }
    if (ssl) {
        sb.append("&ssl=true");
        if (sslfactory != null) {
            sb.append("&sslfactory=").append(sslfactory);
        }
    }
    sb.append("&tcpkeepalive=").append(tcpKeepAlive);
    if (compatible != null) {
        sb.append("&compatible=" + compatible);
    }
    if (applicationName != null) {
        sb.append("&ApplicationName=");
        sb.append(applicationName);
    }
    return sb.toString();
}
So then the question for me was:
OK... if the Postgres data source does not wish to be configured via a URL but rather via the various setters, perhaps I can get away with adding fields such as the following to the application.properties file:
spring.datasource.databaseName=dummydb
Nope, it turns out this is of no use at all.
After some debugging, I managed to find this Spring Boot auto-configuration class:
org.springframework.boot.autoconfigure.jdbc.XADataSourceAutoConfiguration
The above class is very interesting, since it is at the root of the configuration of the Atomikos JTA transaction subsystem.
It will set up the data source:
org.postgresql.xa.PGXADataSource
But the only problem is that this auto-configurator only cares about three attributes in the application.properties file.
See the code snippet below:
private void bindXaProperties(XADataSource target, DataSourceProperties properties) {
    MutablePropertyValues values = new MutablePropertyValues();
    values.add("user", this.properties.determineUsername());
    values.add("password", this.properties.determinePassword());
    values.add("url", this.properties.determineUrl());
    values.addPropertyValues(properties.getXa().getProperties());
    new RelaxedDataBinder(target).withAlias("user", "username").bind(values);
}
As you can see, it appears that I am "toasted".
The auto-configurator has no notion of any other data source properties that could be relevant, which I find surprising, to say the least.
I was expecting some sort of class that would not hard-code a subset of properties such as user name and password.
I was expecting code that would trivially loop over all of the application.properties entries, hunting for any property whose key starts with:
spring.datasource.whatever
and then call datasource.setWhatever on the data source class.
That seems not to be the case.
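That said, the values.addPropertyValues(properties.getXa().getProperties()) line above does hint that keys under spring.datasource.xa.properties.* get bound onto the XADataSource, so a sketch like the following might reach the dedicated setters (the property names are guesses based on the BaseDataSource setters, untested here):
spring.datasource.xa.properties.serverName=localhost
spring.datasource.xa.properties.portNumber=5432
spring.datasource.xa.properties.databaseName=dummydb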
So I am certain I must be messing something up.
I cannot believe that Spring Boot would not be able to configure a Postgres database without forcing a developer to create his own DataSource bean.
Postgres is simply too mainstream...
So I was wondering if someone could help me figure out how to properly configure the application.properties file to ensure that I can indeed connect to Postgres with Spring Boot.
Meanwhile, I will see if I can work around this problem by programmatically creating a class like:
@AutoConfigureBefore(DataSourceAutoConfiguration.class)
@EnableConfigurationProperties(DataSourceProperties.class)
@ConditionalOnClass({ DataSource.class, TransactionManager.class,
        EmbeddedDatabaseType.class })
@ConditionalOnBean(XADataSourceWrapper.class)
@ConditionalOnMissingBean(DataSource.class)
public class XADataSourceAutoConfiguration implements BeanClassLoaderAware {
But in my case the class would know how to programmatically build the Postgres data source with all the properties necessary to connect.
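Something along these lines, as a minimal sketch only, assuming the standard BaseDataSource setters and Spring Boot's XADataSourceWrapper (the connection values are the ones from my application.properties above):

@Bean
public DataSource dataSource(XADataSourceWrapper wrapper) throws Exception {
    // Configure the XA data source through its dedicated setters,
    // since BaseDataSource ignores any externally supplied URL.
    PGXADataSource xaDataSource = new PGXADataSource();
    xaDataSource.setServerName("localhost");
    xaDataSource.setPortNumber(5432);
    xaDataSource.setDatabaseName("dummydb");
    xaDataSource.setUser("DUMMYDB");
    xaDataSource.setPassword("DUMMYDB");
    // Let Spring Boot wrap it so Atomikos manages it as a JTA resource.
    return wrapper.wrapDataSource(xaDataSource);
}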
Many thanks for the help.

Wildfly 10 in cluster tries to serialize JSP with org.infinispan.commons.marshall.NotSerializableException

I'm trying to use my application with the following code in a JSP
<c:forEach var="area" items="#{MissingSearchBean.workingAreas}">
<h:commandButton value="#{area.workingAreaName}(#{area.count})"
action="#{MissingSearchBean.selectWorkingArea(area.workingAreaName)}"
styleClass="commandButton" />
</c:forEach>
inside Wildfly 10. Everything works fine, but when I open the view containing the code above, I see the following error in the logs:
Caused by: org.infinispan.commons.marshall.NotSerializableException: javax.servlet.jsp.jstl.core.IteratedExpression
Caused by: an exception which occurred:
in field iteratedExpression
in field delegate
in field savedState
in field m
in object java.util.HashMap@85e645ff
in object org.wildfly.clustering.marshalling.jboss.SimpleMarshalledValue@85e645ff
I think that Wildfly tries to persist the view into Infinispan to be able to recover it in case I reload the page or hit it on another node.
I've tried to change the scope of the bean to request and even to none, but Wildfly still tries to serialize the view. I'm absolutely sure that the problem is in c:forEach, because when I comment it (or its content) out, I don't get any exceptions.
Also, IteratedExpression obviously contains an Iterator inside, which is indeed not Serializable.
I'm looking for any solution/workaround for this to be able to work in a cluster without throwing exceptions.
The problem is that c:forEach creates an IteratedValueExpression, which is not Serializable because it contains the Iterator inside. A simple workaround is to change the return type of MissingSearchBean.workingAreas to an array.
When the value is represented by an array, LoopTagSupport creates an IndexedValueExpression instead of an IteratedValueExpression, and that one is explicitly Serializable.
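A minimal sketch of that change (the WorkingArea type, bean annotations, and scope are assumptions; the relevant part is the array return type):

@ManagedBean(name = "MissingSearchBean")
@SessionScoped
public class MissingSearchBean implements Serializable {

    private List<WorkingArea> workingAreas = new ArrayList<>();

    // Returning an array instead of a List makes LoopTagSupport build
    // a Serializable IndexedValueExpression for the c:forEach items.
    public WorkingArea[] getWorkingAreas() {
        return workingAreas.toArray(new WorkingArea[0]);
    }
}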

How to return error in tJava?

How can I return error status information from a tJava component in Talend so that the on component error trigger can be used? I have custom code inside the tJava component that may throw exception. In such a situation, I would like to call tDie from tJava using the on component error trigger.
This turned out to be quite simple to achieve, given a design where the tJava component is connected to tDie via an On Component Error trigger:
The tJava component in the middle contained code to generate an exception:
String a = null;
String b = "bar";
// a is null, so this call throws a NullPointerException,
// which fires the On Component Error trigger.
a.equalsIgnoreCase(b);
Running the job printed the exception and the message from tDie:
Exception in component tJava_1
java.lang.NullPointerException
at mrx_talend.test_0_1.test.tJava_1Process(test.java:374)
at mrx_talend.test_0_1.test.runJobInTOS(test.java:842)
at mrx_talend.test_0_1.test.main(test.java:699)
Exiting due to processing failure

Setting up and accessing Flink Queryable State (NullPointerException)

I am using Flink v1.4.0 and I have set up two distinct jobs. The first is a pipeline that consumes data from a Kafka Topic and stores them into a Queryable State (QS). Data are keyed by date. The second submits a query to the QS job and processes the returned data.
Both jobs were working fine with Flink v1.3.2, but with the new update everything has broken. Here is part of the code for the first job:
private void runPipeline() throws Exception {
    StreamExecutionEnvironment env = configurationEnvironment();
    QueryableStateStream<String, DataBucket> dataByDate = env.addSource(sourceDataFromKafka())
        .map(NewDataClass::new)
        .keyBy(data -> data.date)
        .asQueryableState("QSName", reduceIntoSingleDataBucket());
}
and here is the code on the client side:
QueryableStateClient client = new QueryableStateClient("localhost", 6123);
// the state descriptor of the state to be fetched.
ValueStateDescriptor<DataBucket> descriptor = new ValueStateDescriptor<>(
    "QSName",
    TypeInformation.of(new TypeHint<DataBucket>() {}));
JobID jobId = JobID.fromHexString("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
String key = "2017-01-06";
CompletableFuture<ValueState<DataBucket>> resultFuture = client.getKvState(
    jobId,
    "QSName",
    key,
    BasicTypeInfo.STRING_TYPE_INFO,
    descriptor);
try {
    ValueState<DataBucket> valueState = resultFuture.get();
    DataBucket bucket = valueState.value();
    System.out.println(bucket.getLabel());
} catch (IOException | InterruptedException | ExecutionException e) {
    throw new RuntimeException("Unable to query bucket key: " + key, e);
}
I have followed the instructions as per the following link:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/state/queryable_state.html
making sure to enable queryable state on my Flink cluster by copying flink-queryable-state-runtime_2.11-1.4.0.jar from the opt/ folder of the Flink distribution into the lib/ folder, and I checked that it runs in the task manager.
I keep getting the following error:
Exception in thread "main" java.lang.NullPointerException
at org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:84)
at org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:253)
at org.apache.flink.queryablestate.client.QueryableStateClient.getKvState(QueryableStateClient.java:210)
at org.apache.flink.queryablestate.client.QueryableStateClient.getKvState(QueryableStateClient.java:174)
at com.company.dept.query.QuerySubmitter.main(QuerySubmitter.java:37)
Any idea of what is happening? I think that my requests don't reach the QS at all... I really don't know whether and how I should change anything. Thanks.
So, as it turned out, two things were causing this error. The first was the use of the wrong constructor for creating the descriptor on the client side. Rather than using the one that only takes a name for the QS and a TypeHint as input, I had to use another one where a key serializer along with a default value are provided, as per below:
ValueStateDescriptor<DataBucket> descriptor = new ValueStateDescriptor<>(
    "QSName",
    TypeInformation.of(new TypeHint<DataBucket>() {}).createSerializer(new ExecutionConfig()),
    DataBucket.emptyBucket()); // or anything that can be used as a default value
The second was related to the host and port values. The port is different from v1.3.2 and is now 9069, and the host was also different in my case. You can verify both by checking the logs of any task manager for the line:
Started the Queryable State Proxy Server @ ...
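With that information, the client is then constructed against the proxy's host and port instead (the hostname below is a placeholder for whatever your task manager logs report):

QueryableStateClient client = new QueryableStateClient("task-manager-host", 9069);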
Finally, in case you are here because you are looking to allow port-range for queryable state client proxy, I suggest you follow the respective issue (FLINK-7788) here: https://issues.apache.org/jira/browse/FLINK-7788.

Entity Framework (Add/Create) Object reference not set to an instance of an object... but it is?

I'm using Entity Framework to access an existing database. I can access the data, but when I try to add new data I get a NullReferenceException "Object reference not set to an instance of an object", but it is?
The DB connection is fine, I can access the data just fine: List<log> logs = db.log.ToList();
The exception is thrown when using Add or Create:
db.log.Add(new log());
db.log.Create();
StackTrace:
at System.Data.Entity.DbSet`1.Create()
UPDATE:
The error only occurs OUTSIDE the namespace containing the DB context. I can work around it by wrapping the classes so that their "add to DB context" methods live in the DBHandler namespace. But I would like an explanation of why this happens. Is it a bug, or am I violating some sacred .NET law?
Thanks for your time!
Try this:
using (var db = new xxxxxEntities())
{
    List<log> logs = db.log.ToList();
    db.log.Add(new log());
    db.log.Create();
}