Native Client - Serialization Exception when executing Continuous Query - geode

I'm trying to set up a simple Java <-> C#/.NET proof of concept using Apache Geode, specifically testing the continuous query functionality with the .NET native client. A regular Query works fine from .NET; only the Continuous Query has an issue. The problem occurs when I call the Execute() method on the continuous query object. The specific error I get is
Got unhandled message type 26 while processing response, possible serialization mismatch
I'm only storing simple strings in the cache region, so I'm a bit surprised that I'm having serialization issues. I've tried enabling PDX serialization on both sides (and running without it); it doesn't seem to make a difference. Any ideas?
Here is my code for both sides:
Java
Starts a server, puts some data, and then keeps updating a given cache entry.
public class GeodePoc {
public static void main(String[] args) throws Exception {
ServerLauncher serverLauncher = new ServerLauncher.Builder().setMemberName("server1")
.setServerBindAddress("localhost").setServerPort(10334).set("start-locator", "localhost[20341]")
.set(ConfigurationProperties.LOG_LEVEL, "trace")
.setPdxReadSerialized(true)
.set(ConfigurationProperties.CACHE_XML_FILE, "cache.xml").build();
serverLauncher.start();
Cache c = CacheFactory.getAnyInstance();
Region<String, String> r = c.getRegion("example_region");
r.put("test1", "value1");
r.put("test2", "value2");
System.out.println("Cache server successfully started");
int i = 0;
while (true) {
r.put("test1", "value" + i);
System.out.println(r.get("test1"));
Thread.sleep(3000);
i++;
}
}
}
Server cache.xml
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns="http://geode.apache.org/schema/cache" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://geode.apache.org/schema/cache http://geode.apache.org/schema/cache/cache-1.0.xsd"
version="1.0">
<cache-server bind-address="localhost" port="40404"
max-connections="100" />
<pdx>
<pdx-serializer>
<class-name>org.apache.geode.pdx.ReflectionBasedAutoSerializer</class-name>
<parameter name="classes">
<string>java.lang.String</string>
</parameter>
</pdx-serializer>
</pdx>
<region name="example_region">
<region-attributes refid="REPLICATE" />
</region>
</cache>
.NET Client
public static void GeodeTest()
{
Properties<string, string> props = Properties<string, string>.Create();
props.Insert("cache-xml-file", "<path-to-cache.xml>");
CacheFactory cacheFactory = new CacheFactory(props)
.SetPdxReadSerialized(true).SetPdxIgnoreUnreadFields(true)
.Set("log-level", "info");
Cache cache = cacheFactory.Create();
cache.TypeRegistry.PdxSerializer = new ReflectionBasedAutoSerializer();
IRegion<string, string> region = cache.GetRegion<string, string>("example_region");
Console.WriteLine(region.Get("test2", null));
PoolManager pManager = cache.GetPoolManager();
Pool pool = pManager.Find("serverPool");
QueryService qs = pool.GetQueryService();
// Regular query example (works)
Query<string> q = qs.NewQuery<string>("select * from /example_region");
ISelectResults<string> results = q.Execute();
Console.WriteLine("Finished query");
foreach (string result in results)
{
Console.WriteLine(result);
}
// Continuous Query (does not work)
CqAttributesFactory<string, object> cqAttribsFactory = new CqAttributesFactory<string, object>();
ICqListener<string, object> listener = new CacheListener<string, object>();
cqAttribsFactory.InitCqListeners(new ICqListener<string, object>[] { listener });
cqAttribsFactory.AddCqListener(listener);
CqAttributes<string, object> cqAttribs = cqAttribsFactory.Create();
CqQuery<string, object> cquery = qs.NewCq<string, object>("select * from /example_region", cqAttribs, false);
Console.WriteLine(cquery.GetState());
Console.WriteLine(cquery.QueryString);
Console.WriteLine(">>> Cache query example started.");
cquery.Execute();
Console.WriteLine();
Console.WriteLine(">>> Example finished, press any key to exit ...");
Console.ReadKey();
}
.NET Cache Listener
public class CacheListener<TKey, TResult> : ICqListener<TKey, TResult>
{
public virtual void OnEvent(CqEvent<TKey, TResult> ev)
{
object val = ev.getNewValue() as object;
TKey key = ev.getKey();
CqOperation opType = ev.getQueryOperation();
string opStr = "DESTROY";
if (opType == CqOperation.OP_TYPE_CREATE)
opStr = "CREATE";
else if (opType == CqOperation.OP_TYPE_UPDATE)
opStr = "UPDATE";
Console.WriteLine("MyCqListener::OnEvent called with key {0}, op {1}.", key, opStr);
}
public virtual void OnError(CqEvent<TKey, TResult> ev)
{
Console.WriteLine("MyCqListener::OnError called");
}
public virtual void Close()
{
Console.WriteLine("MyCqListener::close called");
}
}
.NET Client cache.xml
<client-cache
xmlns="http://geode.apache.org/schema/cache"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://geode.apache.org/schema/cache http://geode.apache.org/schema/cache/cache-1.0.xsd"
version="1.0">
<pool name="serverPool" subscription-enabled="true">
<locator host="localhost" port="20341"/>
</pool>
<region name="example_region">
<region-attributes refid="CACHING_PROXY" pool-name="serverPool" />
</region>
</client-cache>

This ended up being a simple oversight on my part. In order for continuous query to function you must include the geode-cq dependency on the Java side. I didn't do this, and this caused the exception.
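For reference, a sketch of the missing dependency in Maven form (artifact geode-cq in group org.apache.geode; the version property shown is an assumption and should match the geode-core version you already use):
<dependency>
    <groupId>org.apache.geode</groupId>
    <artifactId>geode-cq</artifactId>
    <!-- assumption: keep in sync with your geode-core version -->
    <version>${geode.version}</version>
</dependency>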

Related

I am not getting results from ksqlDB streamQuery integrated with Java; when I print the log for the client result, it shows "Not Completed"

I am using Confluent Kafka version 6.0, downloaded from https://www.confluent.io/download/.
I am following this article:
https://docs.ksqldb.io/en/latest/developer-guide/ksqldb-clients/java-client/
and this video:
https://www.youtube.com/watch?v=85udigshlNI
With the Java producer code I am able to send values to ksqlDB, but I am not able to retrieve them.
When I print the log for the streamQuery result, I get a "Not Completed" message.
The Maven dependency used:
<dependencies>
<dependency>
<groupId>io.confluent.ksql</groupId>
<artifactId>ksqldb-api-client</artifactId>
<version>${ksqldb.version}</version>
</dependency>
</dependencies>
java code :
public class ExampleApp {
public static String KSQLDB_SERVER_HOST = "localhost";
public static int KSQLDB_SERVER_HOST_PORT = 8088;
public static void main(String[] args) {
ClientOptions options = ClientOptions.create()
.setHost(KSQLDB_SERVER_HOST)
.setPort(KSQLDB_SERVER_HOST_PORT);
Client client = Client.create(options);
// Send requests with the client by following the other examples
// Terminate any open connections and close the client
client.close();
}
}
public class ExampleApp {
public static String KSQLDB_SERVER_HOST = "localhost";
public static int KSQLDB_SERVER_HOST_PORT = 8088;
public static void main(String[] args) throws Exception {
ClientOptions options = ClientOptions.create()
.setHost(KSQLDB_SERVER_HOST)
.setPort(KSQLDB_SERVER_HOST_PORT);
Client client = Client.create(options);
StreamedQueryResult streamedQueryResult = client.streamQuery("SELECT * FROM MY_STREAM EMIT CHANGES;").get();
for (int i = 0; i < 10; i++) {
// Block until a new row is available
Row row = streamedQueryResult.poll();
if (row != null) {
System.out.println("Received a row!");
System.out.println("Row: " + row.values());
} else {
System.out.println("Query has ended.");
}
}
client.close();
}
}
output :
get() waits for a long time even after values are added to the topic, and finally gives a timeout exception.

Camel mongodb - MongoDbProducer multiple inserts

I am trying to do a multiple insert using the Camel MongoDB component.
My POJO representation is:
Person {
String firstName;
String lastName;
}
I have a processor which constructs a valid List of Person POJOs, and it is a valid JSON structure.
When this list of Person objects is sent to the MongoDB producer, the type conversion to BasicDBObject fails on invocation of createDoInsert. The piece of code below looks to be the problem: should it have more fallbacks/checks in place to attempt the list conversion further down, given that it fails on the very first cast itself? Debugging the MongoDbProducer, the exchange body being received is a DBList, which extends DBObject. This causes the singleInsert flag to remain set to true, which makes the insertion below fail because we get a DBList instead of a BasicDBObject:
if(singleInsert) {
BasicDBObject insertObjects = (BasicDBObject)insert;
dbCol.insertOne(insertObjects);
exchange1.getIn().setHeader("CamelMongoOid", insertObjects.get("_id"));
}
The Camel MongoDbProducer code fragment
private Function<Exchange, Object> createDoInsert() {
return (exchange1) -> {
MongoCollection dbCol = this.calculateCollection(exchange1);
boolean singleInsert = true;
Object insert = exchange1.getIn().getBody(DBObject.class);
if(insert == null) {
insert = exchange1.getIn().getBody(List.class);
if(insert == null) {
throw new CamelMongoDbException("MongoDB operation = insert, Body is not conversible to type DBObject nor List<DBObject>");
}
singleInsert = false;
insert = this.attemptConvertToList((List)insert, exchange1);
}
if(singleInsert) {
BasicDBObject insertObjects = (BasicDBObject)insert;
dbCol.insertOne(insertObjects);
exchange1.getIn().setHeader("CamelMongoOid", insertObjects.get("_id"));
} else {
List insertObjects1 = (List)insert;
dbCol.insertMany(insertObjects1);
ArrayList objectIdentification = new ArrayList(insertObjects1.size());
objectIdentification.addAll((Collection)insertObjects1.stream().map((insertObject) -> {
return insertObject.get("_id");
}).collect(Collectors.toList()));
exchange1.getIn().setHeader("CamelMongoOid", objectIdentification);
}
return insert;
};
}
My route is as below :
<route id="uploadFile">
<from uri="jetty://http://0.0.0.0:9886/test"/>
<process ref="fileProcessor"/>
<unmarshal>
<csv>
<header>fname</header>
<header>lname</header>
</csv>
</unmarshal>
<process ref="mongodbProcessor" />
<to uri="mongodb:mongoBean?database=axs175&collection=insurance&operation=insert" />
and the MongoDBProcessor constructing the List of Person Pojo
@Component
public class MongodbProcessor implements Processor {
@Override
public void process(Exchange exchange) throws Exception {
ArrayList<List<String>> personlist = (ArrayList) exchange.getIn().getBody();
ArrayList<Person> persons = new ArrayList<>();
for(List<String> records : personlist){
Person person = new Person();
person.setFname(records.get(0));
person.setLname(records.get(1));
persons.add(person);
}
exchange.getIn().setBody(persons);
}
}
Also requested information here - http://camel.465427.n5.nabble.com/Problems-with-MongoDbProducer-multiple-inserts-tc5792644.html
This issue is now fixed via - https://issues.apache.org/jira/browse/CAMEL-10728
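Until you are on a Camel version that includes that fix, one possible workaround (my own sketch, not taken from the JIRA ticket; the class name below is hypothetical) is to build BasicDBObject documents in the processor and split the list in the route, so the producer receives one BasicDBObject per exchange and the singleInsert branch works as intended:
import java.util.ArrayList;
import java.util.List;
import com.mongodb.BasicDBObject;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class MongodbDocumentProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Each CSV record becomes one MongoDB document instead of a Person POJO.
        List<List<String>> records = (List<List<String>>) exchange.getIn().getBody(List.class);
        List<BasicDBObject> documents = new ArrayList<>();
        for (List<String> record : records) {
            documents.add(new BasicDBObject("firstName", record.get(0))
                    .append("lastName", record.get(1)));
        }
        exchange.getIn().setBody(documents);
    }
}
and in the route, split the list so that each exchange carries a single document:
<split>
    <simple>${body}</simple>
    <to uri="mongodb:mongoBean?database=axs175&amp;collection=insurance&amp;operation=insert" />
</split>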

Why does using the Kryo serialization framework with Apache Storm overwrite data when the bolt gets the values?

Most developers probably use Avro as the serialization framework in their Kafka and Apache Storm schemes. But I need to handle more complex data, so I integrated the Kryo serialization framework into our project, which also runs on Kafka and Apache Storm, and the integration was successful. However, when I went further, a strange situation appeared.
I sent 5 messages to Kafka, and the Storm job can read and deserialize all 5 messages successfully. But the next bolt gets the wrong values: it prints the same value as the last message for all of them. I then added a print-out right after the deserialization code completes, and there it actually prints 5 different messages. Why can't the next bolt see the correct values? See my code below:
KryoScheme.java
public abstract class KryoScheme<T> implements Scheme {
private static final long serialVersionUID = 6923985190833960706L;
private static final Logger logger = LoggerFactory.getLogger(KryoScheme.class);
private Class<T> clazz;
private Serializer<T> serializer;
public KryoScheme(Class<T> clazz, Serializer<T> serializer) {
this.clazz = clazz;
this.serializer = serializer;
}
@Override
public List<Object> deserialize(byte[] buffer) {
Kryo kryo = new Kryo();
kryo.register(clazz, serializer);
T scheme = null;
try {
scheme = kryo.readObject(new Input(new ByteArrayInputStream(buffer)), this.clazz);
logger.info("{}", scheme);
} catch (Exception e) {
String errMsg = String.format("Kryo Scheme failed to deserialize data from Kafka to %s. Raw: %s",
clazz.getName(),
new String(buffer));
logger.error(errMsg, e);
throw new FailedException(errMsg, e);
}
return new Values(scheme);
}}
PrintFunction.java
public class PrintFunction extends BaseFunction {
private static final Logger logger = LoggerFactory.getLogger(PrintFunction.class);
@Override
public void execute(TridentTuple tuple, TridentCollector collector) {
List<Object> data = tuple.getValues();
if (data != null) {
logger.info("Scheme data size: {}", data.size());
for (Object value : data) {
PrintOut out = (PrintOut) value;
logger.info("{}.{}--value: {}",
Thread.currentThread().getName(),
Thread.currentThread().getId(),
out.toString());
collector.emit(new Values(out));
}
}
}}
StormLocalTopology.java
public class StormLocalTopology {
public static void main(String[] args) {
........
BrokerHosts zk = new ZkHosts("xxxxxx");
Config stormConf = new Config();
stormConf.put(Config.TOPOLOGY_DEBUG, false);
stormConf.put(Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS, 1000 * 5);
stormConf.put(Config.TOPOLOGY_WORKERS, 1);
stormConf.put(Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS, 5);
stormConf.put(Config.TOPOLOGY_TASKS, 1);
TridentKafkaConfig actSpoutConf = new TridentKafkaConfig(zk, topic);
actSpoutConf.fetchSizeBytes = 5 * 1024 * 1024 ;
actSpoutConf.bufferSizeBytes = 5 * 1024 * 1024 ;
actSpoutConf.scheme = new SchemeAsMultiScheme(scheme);
actSpoutConf.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
TridentTopology topology = new TridentTopology();
TransactionalTridentKafkaSpout actSpout = new TransactionalTridentKafkaSpout(actSpoutConf);
topology.newStream(topic, actSpout).parallelismHint(4).shuffle()
.each(new Fields("act"), new PrintFunction(), new Fields());
LocalCluster cluster = new LocalCluster();
cluster.submitTopology(topic+"Topology", stormConf, topology.build());
}}
There is also another problem: why can the Kryo scheme only read one message buffer at a time? Is there another way to get multiple message buffers so that I can batch-send data to the next bolt?
Also, if I send 1 message the full flow seems to succeed.
Sending 2 messages goes wrong; the printed messages look like this:
56157 [Thread-18-spout0] INFO s.s.a.s.s.c.KryoScheme - 2016-02-05T17:20:48.122+0800,T6mdfEW#N5pEtNBW
56160 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Scheme data size: 1
56160 [Thread-18-spout0] INFO s.s.a.s.s.c.KryoScheme - 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
56161 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Thread-20-b-0.99--value: 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
56162 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Scheme data size: 1
56162 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Thread-20-b-0.99--value: 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
I'm sorry, this was my mistake. I just found a bug in my Kryo deserialization class: an instance field was used where a local variable should have been, so it could be overwritten in a multi-threaded environment. After keeping that state local instead of in the shared field, the code runs well.
Reference code below:
public class KryoSerializer<T extends BasicEvent> extends Serializer<T> implements Serializable {
private static final long serialVersionUID = -4684340809824908270L;
// This instance field was the bug: it is shared across threads and gets overwritten.
//private T event;
public KryoSerializer(T event) {
// the prototype event is no longer stored; deserialization state must stay local to read()
}
@Override
public void write(Kryo kryo, Output output, T event) {
event.write(output);
}
@Override
public T read(Kryo kryo, Input input, Class<T> type) {
// create a fresh instance per call (new T() does not compile for a type parameter)
T event = kryo.newInstance(type);
event.read(input);
return event;
}
}

Spring Cloud - Getting Retry Working In RestTemplate?

I have been migrating an existing application over to Spring Cloud's service discovery, Ribbon load balancing, and circuit breakers. The application already makes extensive use of RestTemplate, and I have been able to successfully use the load-balanced version of the template. However, I have been testing the situation where there are two instances of a service and I drop one of those instances out of operation. I would like the RestTemplate to fail over to the next server. From the research I have done, it appears that the fail-over logic exists in the Feign client and when using Zuul, but the load-balanced RestTemplate does not seem to have fail-over logic. Diving into the code, it looks like RibbonClientHttpRequestFactory is using the Netflix RestClient (which appears to have logic for doing retries).
So where do I go from here to get this working?
I would prefer to not use the Feign client because I would have to sweep A LOT of code.
I had found this link that suggested using the @Retryable annotation along with @HystrixCommand, but this seems like something that should be part of the load-balanced RestTemplate.
I did some digging into the code for RibbonClientHttpRequestFactory.RibbonHttpRequest:
protected ClientHttpResponse executeInternal(HttpHeaders headers) throws IOException {
try {
addHeaders(headers);
if (outputStream != null) {
outputStream.close();
builder.entity(outputStream.toByteArray());
}
HttpRequest request = builder.build();
HttpResponse response = client.execute(request, config);
return new RibbonHttpResponse(response);
}
catch (Exception e) {
throw new IOException(e);
}
}
It appears that if I override this method and change it to use "client.executeWithLoadBalancer()", I might be able to leverage the retry logic that is built into the RestClient? I guess I could create my own version of the RibbonClientHttpRequestFactory to do this?
Just looking for guidance on the best approach.
Thanks
To answer my own question:
Before I get into the details, a cautionary tale:
Eureka's self preservation mode sent me down a rabbit hole while testing the fail-over on my local machine. I recommend turning self preservation mode off while doing your testing. Because I was dropping nodes at a regular rate and then restarting (with a different instance ID using a random value), I tripped Eureka's self preservation mode. I ended up with many instances in Eureka that pointed to the same machine, same port. The fail-over was actually working but the next node that was chosen happened to be another dead instance. Very confusing at first!
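For anyone testing locally, a sketch of how self preservation can be switched off on the Eureka server (Spring Cloud Netflix YAML; double-check the property name against your version, and only do this in test environments):
eureka:
  server:
    enable-self-preservation: false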
I was able to get fail-over working with a modified version of RibbonClientHttpRequestFactory. Because RibbonAutoConfiguration creates a load-balanced RestTemplate with this factory, rather than injecting that RestTemplate I create a new one with my modified version of the request factory:
protected RestTemplate restTemplate;
@Autowired
public void customizeRestTemplate(SpringClientFactory springClientFactory, LoadBalancerClient loadBalancerClient) {
restTemplate = new RestTemplate();
// Use a modified version of the http request factory that leverages the load balancing in Netflix's RestClient.
RibbonRetryHttpRequestFactory lFactory = new RibbonRetryHttpRequestFactory(springClientFactory, loadBalancerClient);
restTemplate.setRequestFactory(lFactory);
}
The modified Request Factory is just a copy of RibbonClientHttpRequestFactory with two minor changes:
1) In createRequest, I removed the code that was selecting a server from the load balancer because the RestClient will do that for us.
2) In the inner class, RibbonHttpRequest, I changed executeInternal to call "executeWithLoadBalancer".
The full class:
#SuppressWarnings("deprecation")
public class RibbonRetryHttpRequestFactory implements ClientHttpRequestFactory {
private final SpringClientFactory clientFactory;
private LoadBalancerClient loadBalancer;
public RibbonRetryHttpRequestFactory(SpringClientFactory clientFactory, LoadBalancerClient loadBalancer) {
this.clientFactory = clientFactory;
this.loadBalancer = loadBalancer;
}
@Override
public ClientHttpRequest createRequest(URI originalUri, HttpMethod httpMethod) throws IOException {
String serviceId = originalUri.getHost();
IClientConfig clientConfig = clientFactory.getClientConfig(serviceId);
RestClient client = clientFactory.getClient(serviceId, RestClient.class);
HttpRequest.Verb verb = HttpRequest.Verb.valueOf(httpMethod.name());
return new RibbonHttpRequest(originalUri, verb, client, clientConfig);
}
public class RibbonHttpRequest extends AbstractClientHttpRequest {
private HttpRequest.Builder builder;
private URI uri;
private HttpRequest.Verb verb;
private RestClient client;
private IClientConfig config;
private ByteArrayOutputStream outputStream = null;
public RibbonHttpRequest(URI uri, HttpRequest.Verb verb, RestClient client, IClientConfig config) {
this.uri = uri;
this.verb = verb;
this.client = client;
this.config = config;
this.builder = HttpRequest.newBuilder().uri(uri).verb(verb);
}
@Override
public HttpMethod getMethod() {
return HttpMethod.valueOf(verb.name());
}
@Override
public URI getURI() {
return uri;
}
@Override
protected OutputStream getBodyInternal(HttpHeaders headers) throws IOException {
if (outputStream == null) {
outputStream = new ByteArrayOutputStream();
}
return outputStream;
}
@Override
protected ClientHttpResponse executeInternal(HttpHeaders headers) throws IOException {
try {
addHeaders(headers);
if (outputStream != null) {
outputStream.close();
builder.entity(outputStream.toByteArray());
}
HttpRequest request = builder.build();
HttpResponse response = client.executeWithLoadBalancer(request, config);
return new RibbonHttpResponse(response);
}
catch (Exception e) {
throw new IOException(e);
}
//TODO: fix stats, now that execute is not called
// use execute here so stats are collected
/*
return loadBalancer.execute(this.config.getClientName(), new LoadBalancerRequest<ClientHttpResponse>() {
@Override
public ClientHttpResponse apply(ServiceInstance instance) throws Exception {}
});
*/
}
private void addHeaders(HttpHeaders headers) {
for (String name : headers.keySet()) {
// apache http RequestContent pukes if there is a body and
// the dynamic headers are already present
if (!isDynamic(name) || outputStream == null) {
List<String> values = headers.get(name);
for (String value : values) {
builder.header(name, value);
}
}
}
}
private boolean isDynamic(String name) {
return name.equals("Content-Length") || name.equals("Transfer-Encoding");
}
}
public class RibbonHttpResponse extends AbstractClientHttpResponse {
private HttpResponse response;
private HttpHeaders httpHeaders;
public RibbonHttpResponse(HttpResponse response) {
this.response = response;
this.httpHeaders = new HttpHeaders();
List<Map.Entry<String, String>> headers = response.getHttpHeaders().getAllHeaders();
for (Map.Entry<String, String> header : headers) {
this.httpHeaders.add(header.getKey(), header.getValue());
}
}
@Override
public InputStream getBody() throws IOException {
return response.getInputStream();
}
@Override
public HttpHeaders getHeaders() {
return this.httpHeaders;
}
@Override
public int getRawStatusCode() throws IOException {
return response.getStatus();
}
@Override
public String getStatusText() throws IOException {
return HttpStatus.valueOf(response.getStatus()).name();
}
@Override
public void close() {
response.close();
}
}
}
I had the same problem but then, out of the box, everything was working (using a @LoadBalanced RestTemplate). I am using the Finchley version of Spring Cloud, and I think my problem was that I was not explicitly adding spring-retry to my pom configuration. I'll leave here my spring-retry related yml configuration (remember this only works with a @LoadBalanced RestTemplate, Zuul, or Feign):
spring:
# Ribbon retries on
cloud:
loadbalancer:
retry:
enabled: true
# Ribbon service config
my-service:
ribbon:
MaxAutoRetries: 3
MaxAutoRetriesNextServer: 1
OkToRetryOnAllOperations: true
retryableStatusCodes: 500, 502
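For completeness, a sketch of the pom dependency the answer refers to (no version shown; with Spring Boot/Cloud dependency management it is usually resolved for you, otherwise pick one that matches your release train):
<dependency>
    <groupId>org.springframework.retry</groupId>
    <artifactId>spring-retry</artifactId>
</dependency>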

Spring AOP @AspectJ: how do I give aspects access to other classes

I am quite new to Java and Spring. I would like to find out if it is possible, and if so how, to get my aspects to apply to more than one class without having to call the method from the class where the aspects "work".
This is my main class. Aspects work on any methods I call directly from this class, but will not work on any of the other methods called by other classes (even if they are not internal):
public class AopMain {
public static void main(String[] args) {
String selection = "on";
ApplicationContext ctx = new ClassPathXmlApplicationContext("spring.xml");
do {
try{
System.out.println("Enter 'length' for a length conversion and 'temperature' for a temperature conversion and 'quit' to quit");
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
selection = br.readLine();
if(selection.contentEquals("length")) {
LengthService lengthService = ctx.getBean("lengthService", LengthService.class);
lengthService.runLengthService();
lengthService.display();
}
else if(selection.contentEquals("temperature")) {
TemperatureService temperatureService = new TemperatureService();
temperatureService.runTempertureService();
temperatureService.display();
}
}
catch (Exception e) {
System.out.println("Input error");
}
} while (!selection.contentEquals("quit"));
}
}
This is one of the conversion service classes:
public class TemperatureService {
String fromUnit = null;
String toUnit = null;
double val = 0;
double converted = 0;
public void runTempertureService() {
Scanner in = new Scanner(System.in);
System.out.println("Convert from (enter C, K, F): ");
fromUnit = in.nextLine();
System.out.println("Convert to (enter C, K, F): ");
toUnit = in.nextLine();
TemperatureConverter from = new TemperatureConverter(fromUnit);
TemperatureConverter to = new TemperatureConverter(toUnit);
System.out.println("Value:");
val = in.nextDouble();
double celcius = from.toCelcius(val);
converted = to.fromCelcius(celcius);
from.display(val, fromUnit, converted, toUnit);
System.out.println(val + " " + fromUnit + " = " + converted + " " + toUnit);
}
public String[] display(){
String[] displayString = {Double.toString(val), fromUnit, Double.toString(converted), toUnit};
return displayString;
}
}
And this is one of the conversion classes:
public class TemperatureConverter {
final double C_TO_F = 33.8;
final double C_TO_C = 1;
final double C_TO_KELVIN = 274.15;
private double factor;
public TemperatureConverter(String unit) {
if (unit.contentEquals("F"))
factor = C_TO_F;
else if(unit.contentEquals("C"))
factor = C_TO_C;
else if(unit.contentEquals("K"))
factor = C_TO_KELVIN;
}
public double toCelcius(double measurement) {
return measurement * factor;
}
public double fromCelcius(double measurement) {
return measurement/factor;
}
public TemperatureConverter() {}
public void display(double val, String fromUnit, double converted, String toUnit) {}
}
This is my configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd">
<aop:aspectj-autoproxy/>
<bean name= "lengthConverter" class= "converter.method.LengthConverter"/>
<bean name= "temperatureConverter" class= "converter.method.TemperatureConverter"/>
<bean name= "lengthService" class= "converter.service.LengthService" autowire = "byName"/>
<bean name= "temperatureService" class= "converter.service.TemperatureService"/>
<bean name="ValidationAspect" class= "converter.aspect.ValidationAspect" />
<bean name="DisplayAspect" class= "converter.aspect.DisplayAspect" />
</beans>
I want to be able to apply an aspect to methods of the converter class called by the service class, but as I have mentioned, it doesn't work unless the method is called from the main class directly. (The display function was originally part of the converter class, but I moved it so that the aspect would work.) Also, why will an aspect not pick up the nextLine() method call?
Edit:
This is one of my aspects:
@Aspect
public class DisplayAspect {
@AfterReturning(pointcut = "execution(* display(..))", returning = "retVal")
public void fileSetUp(Object retVal) {
System.out.println("So we found the display things");
Writer writer = null;
String[] returnArray = (String[]) retVal;
try {
System.out.println("inside try");
String text = "The opertion performed was: " + returnArray[0] + " in " + returnArray[1] + " is " + returnArray[2] + " " + returnArray[3] + "\n";
File file = new File("Log.txt");
writer = new BufferedWriter(new FileWriter(file, true));
writer.write(text);
} catch (FileNotFoundException e1) {
e1.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} finally {
try {
if (writer != null) {
writer.close();
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
I want to be able to apply an aspect to functions of the converter class
Well, then change your pointcut so as to intercept the methods (not functions, they are called methods) you want to handle in your advice. At the moment the pointcut is
execution(* display(..))
I.e. it will intercept all methods named display with any number of parameters and any return type. If you want to intercept all converter methods instead, change it to
execution(* converter.method.TemperatureConverter.*(..))
instead.
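Applied in an aspect of the same style as the poster's, a minimal sketch of that broader pointcut might look like this (the advice body is just illustrative logging; note that with Spring AOP the advice only applies to TemperatureConverter instances that are Spring beans, not to ones created with new):
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class ConverterAspect {
    // Matches every method of TemperatureConverter, regardless of name, parameters or return type.
    @Before("execution(* converter.method.TemperatureConverter.*(..))")
    public void logConverterCall(JoinPoint joinPoint) {
        System.out.println("Converter method called: " + joinPoint.getSignature());
    }
}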
like I have mentioned, it doesn't work unless the method is called from the main class directly.
I need to guess because this description is unclear, but probably what you are trying to describe is that the advice is only applied if TemperatureService.display() is called from outside the class, not from a method within TemperatureService. This is a known and well-described limitation of Spring AOP; see the Spring Manual, chapter 9.6, "Proxying mechanisms". Due to the proxy-based "AOP lite" approach of Spring AOP, this cannot work, because internal calls to methods of this are not routed through the dynamic proxy created by the Spring container. Thus, Spring AOP only works for inter-bean calls, not intra-bean ones. If you need to intercept internal calls, you need to switch to full-blown AspectJ, which can easily be integrated into Spring applications via LTW (load-time weaving), as described in chapter 9.8, "Using AspectJ with Spring applications".
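To make that limitation concrete, a small sketch (a simplified version of the poster's service, not the actual code): the internal call below is never advised, while the same method called on the proxied bean from outside is:
public class TemperatureService {

    public void runTempertureService() {
        // self-invocation: this.display() does not go through the Spring proxy,
        // so DisplayAspect's @AfterReturning advice will NOT fire here
        display();
    }

    public String[] display() {
        return new String[] { "0.0", "C", "32.0", "F" };
    }
}

// In contrast, calling display() on the bean obtained from the ApplicationContext IS advised:
// TemperatureService temperatureService = ctx.getBean("temperatureService", TemperatureService.class);
// temperatureService.display();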