Issue with Spring Batch Application - spring-batch

I am reading a huge dataset from MySQL and sending it to a Java microservice, which in turn dumps the data into MongoDB.
But I am getting the following error:
org.springframework.batch.core.JobExecutionException: Partition handler returned an unsuccessful step
at org.springframework.batch.core.partition.support.PartitionStep.doExecute(PartitionStep.java:112)
at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:203)
at org.springframework.batch.core.job.SimpleStepHandler.handleStep(SimpleStepHandler.java:148)
at org.springframework.batch.core.job.flow.JobFlowExecutor.executeStep(JobFlowExecutor.java:68)
at org.springframework.batch.core.job.flow.support.state.StepState.handle(StepState.java:67)
at org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:169)
at org.springframework.batch.core.job.flow.support.SimpleFlow.start(SimpleFlow.java:144)
at org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:136)
at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:313)
at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:144)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.batch.core.configuration.annotation.SimpleBatchConfiguration$PassthruAdvice.invoke(SimpleBatchConfiguration.java:127)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
at com.sun.proxy.$Proxy87.run(Unknown Source)
at org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner.execute(JobLauncherCommandLineRunner.java:214)
at org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner.executeLocalJobs(JobLauncherCommandLineRunner.java:186)
at org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner.launchJobFromProperties(JobLauncherCommandLineRunner.java:172)
I checked Spring Batch's metadata table, and it seems the tasks are NOT getting completed, so the job may be running out of executor threads:
select STATUS,EXIT_CODE from BATCH_STEP_EXECUTION;
+-----------+-----------+
| STATUS    | EXIT_CODE |
+-----------+-----------+
| FAILED    | FAILED    |
| STARTING  | EXECUTING |
| COMPLETED | COMPLETED |
| COMPLETED | COMPLETED |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| COMPLETED | COMPLETED |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| COMPLETED | COMPLETED |
| COMPLETED | COMPLETED |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| COMPLETED | COMPLETED |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| COMPLETED | COMPLETED |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
| STARTING  | EXECUTING |
Only about 10% of the tasks reached the COMPLETED state.
Sample Code:
@Bean
public Step slaveStep() {
    StepBuilder stepBuilder = stepBuilderFactory.get("slave-step");
    return stepBuilder.<CarDekhoUser, ConnectoUserDto>chunk(20)
            .reader(pagingItemReader(null, null))
            .processor(dataProcessor())
            .writer(mosaicApiWriter())
            .faultTolerant()
            .retryLimit(3)
            .retry(ConnectionPoolTimeoutException.class)
            .retry(ConnectTimeoutException.class)
            .retry(DeadlockLoserDataAccessException.class)
            .build();
}

@Bean
public Step masterStep() {
    return stepBuilderFactory.get("master-step")
            .partitioner(slaveStep().getName(), dataPartitioner())
            .step(slaveStep())
            .gridSize(100)
            .taskExecutor(taskExecutor())
            .build();
}

@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setMaxPoolSize(10);
    taskExecutor.setCorePoolSize(10);
    taskExecutor.setQueueCapacity(10);
    taskExecutor.setThreadNamePrefix("Slave-Task-Executor");
    taskExecutor.setAllowCoreThreadTimeOut(true);
    return taskExecutor;
}
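One mismatch worth checking, purely as an assumption from the configuration shown rather than anything the logs confirm: masterStep() requests gridSize(100), so the partition handler submits roughly 100 step executions, but this executor can run only 10 tasks and queue 10 more. Spring's ThreadPoolTaskExecutor rejects further submissions by default (AbortPolicy), and rejected partitions never get to run, which would be consistent with so many steps stuck in STARTING/EXECUTING. A minimal sketch of an executor sized to accept every partition:

import java.util.concurrent.ThreadPoolExecutor;

@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(10);
    taskExecutor.setMaxPoolSize(10);
    // The queue must hold all partitions that are not yet running
    // (gridSize is 100 here, so 100 is a safe capacity).
    taskExecutor.setQueueCapacity(100);
    // Alternatively, run overflow tasks on the submitting thread instead of
    // rejecting them with a TaskRejectedException (the default AbortPolicy).
    taskExecutor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    taskExecutor.setThreadNamePrefix("Slave-Task-Executor");
    taskExecutor.setAllowCoreThreadTimeOut(true);
    return taskExecutor;
}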
Partitioner code:
public class CDPartitioner implements Partitioner {

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        int min = 5000000;
        int max = 5050000;
        int targetSize = (max - min) / gridSize + 1;
        LOG.info(" **** Start Id " + min + " End Id " + max + " target size " + targetSize);
        Map<String, ExecutionContext> result = new HashMap<>();
        int number = 0;
        int start = min;
        int end = start + targetSize - 1;
        while (start <= max) {
            ExecutionContext value = new ExecutionContext();
            result.put("partition" + number, value);
            if (end >= max) {
                end = max;
            }
            value.putInt("minValue", start);
            value.putInt("maxValue", end);
            start += targetSize;
            end += targetSize;
            number++;
        }
        LOG.info(" **** Partition Details " + result);
        return result;
    }
}
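As a side note, the reader(pagingItemReader(null, null)) call in slaveStep() only picks up the minValue/maxValue this partitioner writes if the reader bean is step-scoped, so the values are late-bound from each partition's ExecutionContext. A hedged sketch, assuming a JdbcPagingItemReader over an id column (the actual reader, table name, and mapping are not shown in the question):

@Bean
@StepScope
public JdbcPagingItemReader<CarDekhoUser> pagingItemReader(
        @Value("#{stepExecutionContext['minValue']}") Integer minValue,
        @Value("#{stepExecutionContext['maxValue']}") Integer maxValue) {
    JdbcPagingItemReader<CarDekhoUser> reader = new JdbcPagingItemReader<>();
    reader.setDataSource(dataSource); // assumed to be injected elsewhere
    reader.setPageSize(20); // match the chunk size
    MySqlPagingQueryProvider provider = new MySqlPagingQueryProvider();
    provider.setSelectClause("SELECT *");
    provider.setFromClause("FROM users"); // hypothetical table name
    provider.setWhereClause("WHERE id BETWEEN " + minValue + " AND " + maxValue);
    provider.setSortKeys(Collections.singletonMap("id", Order.ASCENDING));
    reader.setQueryProvider(provider);
    reader.setRowMapper(new BeanPropertyRowMapper<>(CarDekhoUser.class));
    return reader;
}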

Related

JPA : how to generate a common id?

I need to allocate the same unique id (batchid) to every row inserted in a DB during a given batch execution, as illustrated below:
| id | batchid |
| -- | ------- |
| 1 | 1 |
| 2 | 1 |
| 3 | 2 |
| 4 | 2 |
| 5 | 2 |
| 6 | 3 |
Is there an automated way to do it with a JPA annotation, like with a sequence?
For now I did it this way:
@Repository
public interface SeqRepository extends JpaRepository<CsvEntity, Long> {

    @Query(value = "SELECT nextval('batch_id_seq')", nativeQuery = true)
    Integer getNextBatchId();
}
schema.sql
CREATE SEQUENCE IF NOT EXISTS batch_id_seq
INCREMENT 1
START 1;
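There is no standard JPA annotation for this: @GeneratedValue with a sequence produces a new value per row, not per batch run, so fetching nextval once per run and stamping it on every row (as the repository above allows) is a reasonable pattern. A hedged usage sketch, assuming CsvEntity has a batchId field; mapToEntity is a hypothetical mapping helper:

Integer batchId = seqRepository.getNextBatchId(); // one value for the whole run
for (CsvRecord record : records) {
    CsvEntity entity = mapToEntity(record);  // hypothetical helper
    entity.setBatchId(batchId);              // same batchid for every row of this run
    seqRepository.save(entity);
}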

Kafka windowed stream: make grace and suppress key-aware

I currently have a simple stream of data, for example:
|-----|--------|-------|
| Key | TS(ms) | Value |
|-----|--------|-------|
| A | 1000 | 0 |
| A | 1000 | 0 |
| A | 61000 | 0 |
| A | 61000 | 0 |
| A | 121000 | 0 |
| A | 121000 | 0 |
| A | 181000 | 10 |
| A | 181000 | 10 |
| A | 241000 | 10 |
| A | 241000 | 10 |
| B | 1000 | 0 |
| B | 1000 | 0 |
| B | 61000 | 0 |
| B | 61000 | 0 |
| B | 121000 | 0 |
| B | 121000 | 0 |
| B | 181000 | 10 |
| B | 181000 | 10 |
| B | 1000 | 10 |
| B | 241000 | 10 |
| B | 241000 | 10 |
|-----|--------|-------|
This is also the order in which I publish the data to the topic. The value isn't really an integer but an Avro record; the key is a string.
My code is this:
KStream<Windowed<String>, Long> aggregatedStream = inputStream
        .groupByKey()
        .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(Duration.ZERO))
        .count()
        .toStream();

aggregatedStream.print(Printed.toSysOut());
The output of the print is:
[KTABLE-TOSTREAM-0000000003]: [A#0/60000], 1
[KTABLE-TOSTREAM-0000000003]: [A#0/60000], 2
[KTABLE-TOSTREAM-0000000003]: [A#60000/120000], 1
[KTABLE-TOSTREAM-0000000003]: [A#60000/120000], 2
[KTABLE-TOSTREAM-0000000003]: [A#120000/180000], 1
[KTABLE-TOSTREAM-0000000003]: [A#120000/180000], 2
[KTABLE-TOSTREAM-0000000003]: [A#180000/240000], 1
[KTABLE-TOSTREAM-0000000003]: [A#180000/240000], 2
[KTABLE-TOSTREAM-0000000003]: [A#240000/300000], 1
[KTABLE-TOSTREAM-0000000003]: [A#240000/300000], 2
[KTABLE-TOSTREAM-0000000003]: [B#240000/300000], 1
[KTABLE-TOSTREAM-0000000003]: [B#240000/300000], 2
It seems that the grace period applies globally, independently of the stream key. What I expect instead (if possible) is to receive all 10 window counts for key A and all 10 window counts for key B, so that grace only closes windows based on the key of the stream.
Is that possible?
It seems that grace and suppress use a single stream time per partition, so it's not possible to have a different one per key.
What is possible instead is to disable the grace period and replace the regular suppress with a custom transformer that suppresses per key.
For example, this is part of our code:
KStream<String, ...> aggregatedStream = pairsStream
        .groupByKey()
        .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))
        .aggregate(...your aggregation logic...)
        .toStream()
        .flatTransform(new TransformerSupplier<Windowed<String>, AggregateOutput, Iterable<KeyValue<String, SuppressedOutput>>>() {
            @Override
            public Transformer<Windowed<String>, AggregateOutput, Iterable<KeyValue<String, SuppressedOutput>>> get() {
                return new Transformer<Windowed<String>, AggregateOutput, Iterable<KeyValue<String, SuppressedOutput>>>() {

                    private KeyValueStore<String, SuppressedOutput> store;

                    @SuppressWarnings("unchecked")
                    @Override
                    public void init(ProcessorContext context) {
                        store = (KeyValueStore<String, SuppressedOutput>) context.getStateStore("suppress-store");
                    }

                    @Override
                    public Iterable<KeyValue<String, SuppressedOutput>> transform(Windowed<String> window, AggregateOutput sequenceList) {
                        String messageKey = window.key();
                        long windowEndTimestamp = window.window().endTime().toEpochMilli();
                        SuppressedOutput currentSuppressedOutput = new SuppressedOutput(windowEndTimestamp, sequenceList);
                        SuppressedOutput storeValue = store.get(messageKey);
                        if (storeValue == null) {
                            // First time we receive a window for that key
                        } else if (windowEndTimestamp > storeValue.getTimestamp()) {
                            // Received a new window
                        } else if (windowEndTimestamp < storeValue.getTimestamp()) {
                            // Window older than the last window we've received
                        }
                        store.put(messageKey, currentSuppressedOutput);
                        return new ArrayList<>();
                    }

                    @Override
                    public void close() {
                    }
                };
            }
        }, "suppress-store")

OrientDB distributed mode : data not getting distributed across various nodes

I have started OrientDB Enterprise 2.2.7 with two nodes. Here is how my setup looks:
CONFIGURED SERVERS
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|# |Name |Status|Connections|StartedOn |Binary |HTTP |UsedMemory |FreeMemory |MaxMemory|
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|0 |Batman|ONLINE|3 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|480.98MB (94.49%)|28.02MB (5.51%) |509.00MB |
|1 |Robin |ONLINE|3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |403.50MB (79.35%)|105.00MB (20.65%)|508.50MB |
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
orientdb {db=SocialPosts3}> clusters
Now I have two vertex classes, User and Note, with an edge type Posted. All vertices and edges have properties, and there is a unique index on each vertex class.
I started pushing data using the Java API:
while (retry++ != MAX_RETRY) {
    try {
        properties.put(uniqueIndexname, uniqueIndexValue);
        Iterable<Vertex> resultset = graph.getVertices(className, new String[] { uniqueIndexname },
                new Object[] { uniqueIndexValue });
        if (resultset != null) {
            vertex = resultset.iterator().hasNext() ? resultset.iterator().next() : null;
        }
        if (vertex == null) {
            vertex = graph.addVertex("class:" + className, properties);
            graph.commit();
            return vertex;
        } else {
            for (String key : properties.keySet()) {
                vertex.setProperty(key, properties.get(key));
            }
        }
        logger.info("Completed upserting vertex " + uniqueIndexValue);
        graph.commit();
        break;
    } catch (ONeedRetryException ex) {
        logger.warn("Retry for exception - " + uniqueIndexValue);
    } catch (Exception e) {
        logger.error("Can not create vertex - " + e.getMessage());
        graph.rollback();
        break;
    }
}
Similarly for the notes and edges.
I populated around 200k users and 3.5M notes, and now I notice that the data is going to only one node.
On running the "clusters" command, I see that the populated clusters are all owned by the same node, and hence nearly all data is present on one node:
|#  |NAME     |ID |CLASS  |CONFLICT-STRATEGY|COUNT  |OWNER_SERVER|OTHER_SERVERS|AUTO_DEPLOY_NEW_NODE|
|22 |note | 26|Note | | 75| Robin | [Batman] | true |
|23 |note_1 | 27|Note | |1750902| Batman | [Robin] | true |
|24 |note_2 | 28|Note | |1750789| Batman | [Robin] | true |
|25 |note_3 | 29|Note | | 75| Robin | [Batman] | true |
|26 |posted | 34|Posted | | 0| Robin | [Batman] | true |
|27 |posted_1 | 35|Posted | | 1| Robin | [Batman] | true |
|28 |posted_2 | 36|Posted | |1739823| Batman | [Robin] | true |
|29 |posted_3 | 37|Posted | |1749250| Batman | [Robin] | true |
|30 |user | 30|User | | 102059| Batman | [Robin] | true |
|31 |user_1 | 31|User | | 1| Robin | [Batman] | true |
|32 |user_2 | 32|User | | 0| Robin | [Batman] | true |
|33 |user_3 | 33|User | | 102127| Batman | [Robin] | true |
I see the CPU of one node at around 99% while the other is below 1%.
How can I make sure that data is uniformly distributed across all nodes in the cluster?
Update:
The database is propagated to both nodes. I can log in to Studio on either node and see the database listed, and querying either node gives the same results, so the nodes are in sync.
Here is the server log from one of the nodes; it is almost the same on the other node.
2016-08-18 19:28:49:668 INFO [Robin]<-[Batman] Received new status Batman.SocialPosts3=SYNCHRONIZING [OHazelcastPlugin]
2016-08-18 19:28:49:670 INFO [Robin] Current node started as MASTER for database 'SocialPosts3' [OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=2)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+
| | | | MASTER |
| | | |SYNCHRONIZING|
+--------+-----------+----------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman |
+--------+-----------+----------+-------------+
|* | 1 | 1 | X |
|internal| 1 | 1 | |
+--------+-----------+----------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:766 INFO [Robin] Adding node 'Robin' in partition: SocialPosts3 db=[*] v=3 [ODistributedDatabaseImpl$1]
2016-08-18 19:28:49:767 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+-------------+
| | | | MASTER | MASTER |
| | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+-----------+----------+-------------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman | Robin |
+--------+-----------+----------+-------------+-------------+
|* | 2 | 1 | X | o |
|internal| 2 | 1 | | |
+--------+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:767 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:769 WARNI [Robin]->[[Batman]] Requesting deploy of database 'SocialPosts3' on local server... [OHazelcastPlugin]
2016-08-18 19:28:52:192 INFO [Robin]<-[Batman] Copying remote database 'SocialPosts3' to: /tmp/orientdb/install_SocialPosts3.zip [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin]<-[Batman] Installing database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3... [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin] - writing chunk #1 offset=0 size=43.38KB [OHazelcastPlugin]
2016-08-18 19:28:52:194 INFO [Robin] Database copied correctly, size=43.38KB [ODistributedAbstractPlugin$3]
2016-08-18 19:28:52:279 WARNI {db=SocialPosts3} Storage 'SocialPosts3' was not closed properly. Will try to recover from write ahead log [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 SEVER {db=SocialPosts3} Restore is not possible because write ahead log is empty. [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 INFO {db=SocialPosts3} Storage data recover was completed [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:294 INFO {db=SocialPosts3} [Robin] Installed database 'SocialPosts3' (LSN=OLogSequenceNumber{segment=0, position=24}) [OHazelcastPlugin]
2016-08-18 19:28:52:304 INFO [Robin] Reassigning cluster ownership for database SocialPosts3 [OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+----+-----------+----------+-------------+-------------+
| | | | | MASTER | MASTER |
| | | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+----+-----------+----------+-------------+-------------+
|CLUSTER | id|writeQuorum|readQuorum| Batman | Robin |
+--------+----+-----------+----------+-------------+-------------+
|* | | 2 | 1 | X | o |
|internal| 0| 2 | 1 | | |
+--------+----+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] Distributed servers status:
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Name |Status|Databases |Conns|StartedOn |Binary |HTTP |UsedMemory |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Batman|ONLINE|GoodBoys=ONLINE (MASTER) |5 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|426.47MB/509.00MB (83.79%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
|Robin*|ONLINE|GoodBoys=ONLINE (MASTER) |3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |353.77MB/507.50MB (69.71%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
| | |SocialPosts3=SYNCHRONIZING (MASTER) | | | | | |
| | |SocialPosts2=ONLINE (MASTER) | | | | | |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
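A hedged note on a likely cause, based on how OrientDB 2.2 assigns cluster ownership rather than on anything in the logs above: each node owns a subset of a class's clusters, and new records created through a connection land in a cluster owned by the server that connection points to. If every client writes through one node, that node's clusters receive practically all the records, which is what the cluster counts above show. One option to verify is client-side load balancing; the configuration key and multi-address URL below follow the 2.2 docs, but treat this as a sketch:

// Sketch: distribute new connections across both servers so inserts land in
// clusters owned by different nodes. Credentials and strategy value are assumptions.
OGlobalConfiguration.CLIENT_CONNECTION_STRATEGY.setValue("ROUND_ROBIN_CONNECT");
OrientGraphFactory factory = new OrientGraphFactory(
        "remote:10.0.0.195;10.0.0.37/SocialPosts3", "admin", "admin");
OrientGraph graph = factory.getTx();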

nova diagnostics in devstack development

Over SSH, when I run this command:
nova diagnostics 2ad0dda0-072d-46c4-8689-3c487a452248
I get all the resources of the instance in DevStack:
+---------------------------+----------------------+
| Property | Value |
+---------------------------+----------------------+
| cpu0_time | 3766640000000 |
| hdd_errors | 18446744073709551615 |
| hdd_read | 111736 |
| hdd_read_req | 73 |
| hdd_write | 0 |
| hdd_write_req | 0 |
| memory | 2097152 |
| memory-actual | 2097152 |
| memory-available | 1922544 |
| memory-major_fault | 2710 |
| memory-minor_fault | 10061504 |
| memory-rss | 509392 |
| memory-swap_in | 0 |
| memory-swap_out | 0 |
| memory-unused | 1079468 |
| tap5a148e0f-b8_rx | 959777 |
| tap5a148e0f-b8_rx_drop | 0 |
| tap5a148e0f-b8_rx_errors | 0 |
| tap5a148e0f-b8_rx_packets | 8758 |
| tap5a148e0f-b8_tx | 48872 |
| tap5a148e0f-b8_tx_drop | 0 |
| tap5a148e0f-b8_tx_errors | 0 |
| tap5a148e0f-b8_tx_packets | 615 |
| vda_errors | 18446744073709551615 |
| vda_read | 597230592 |
| vda_read_req | 31443 |
| vda_write | 164690944 |
| vda_write_req | 18422 |
+---------------------------+----------------------+
How can I get this in the DevStack user interface?
Please help. Thanks in advance.
It's not available in the OpenStack Icehouse/Juno versions, though Juno can be modified to retrieve it in DevStack.
I didn't use OpenStack Kilo. In Juno, if your hypervisor is libvirt, vSphere, or XenAPI, then you can retrieve these statistics in the DevStack UI. To do this:
For libvirt
In ceilometer/compute/virt/libvirt/inspector.py, add this:
from oslo.utils import units

from ceilometer.compute.pollsters import util


def inspect_memory_usage(self, instance, duration=None):
    instance_name = util.instance_name(instance)
    domain = self._lookup_by_name(instance_name)
    state = domain.info()[0]
    if state == libvirt.VIR_DOMAIN_SHUTOFF:
        LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                   'domain is in state of SHUTOFF'),
                 {'instance_name': instance_name})
        return
    try:
        memory_stats = domain.memoryStats()
        if (memory_stats and
                memory_stats.get('available') and
                memory_stats.get('unused')):
            memory_used = (memory_stats.get('available') -
                           memory_stats.get('unused'))
            # Stat provided from libvirt is in KB, converting it to MB.
            memory_used = memory_used / units.Ki
            return virt_inspector.MemoryUsageStats(usage=memory_used)
        else:
            LOG.warn(_('Failed to inspect memory usage of '
                       '%(instance_name)s, can not get info from libvirt'),
                     {'instance_name': instance_name})
    # memoryStats might launch an exception if the method
    # is not supported by the underlying hypervisor being
    # used by libvirt
    except libvirt.libvirtError as e:
        LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                   'can not get info from libvirt: %(error)s'),
                 {'instance_name': instance_name, 'error': e})
For more details you can check the following link:
https://review.openstack.org/#/c/90498/

Dynamic refresh of a composite

I have a tree viewer next to a specialized viewer. When something is selected in the tree viewer, details about this object are shown in the specialized viewer. TreeViewer tree, Composite control, and MySpecializedViewer viewer are instance variables.
public TheEverythingViewer(Composite parent) {
    control = new Composite(parent, SWT.NONE);
    control.setLayout(new GridLayout(2, false));
    tree = new TreeViewer(control);
    tree.setContentProvider(new MyContentProvider());
    tree.setLabelProvider(new MyLabelProvider());
    tree.setUseHashlookup(true);
    tree.getControl().setLayoutData(new GridData(GridData.BEGINNING, GridData.FILL, false, true, 1, 1));
    tree.addSelectionChangedListener(new ISelectionChangedListener() {
        @Override
        public void selectionChanged(SelectionChangedEvent event) {
            try {
                IStructuredSelection sel = (IStructuredSelection) event.getSelection();
                MyClass myInput = (MyClass) sel.getFirstElement();
                if (viewer != null && !viewer.getControl().isDisposed()) {
                    viewer.getControl().dispose();
                }
                viewer = new MySpecializedViewer(control, table);
                control.getShell().layout();
            } catch (Exception e) {
                if (viewer != null && !viewer.getControl().isDisposed()) {
                    viewer.getControl().dispose();
                }
                viewer = null;
            }
        }
    });
}
Am I doing something wrong? I just want:
+--------------+--------------------------------------------+
| + Node | |
| - Node | |
| + Node | My |
| - Node | |
| - Node | Specialized |
| | Viewer |
| | |
| | |
| | |
| | |
| | |
| | |
| | +--------+ |
| | | | |
| | | | |
| | | | |
| | +--------+ |
| | |
| | |
| | |
| | |
+--------------+--------------------------------------------+
The specialized viewer has tables that need to consume more or less space depending on the selected node. And currently, creating a new instance of the specialized viewer is much, much simpler than changing its input (that wouldn't work at the moment).
Yes, you shouldn't be recreating the viewer every time the selection changes in your tree. You should just send the tree's selection to the existing viewer as its input, at which point it can do whatever you want with the new input. You're also never setting layout data on your specialized viewer's control, and forcing the entire shell to re-layout is wasteful.
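A hedged sketch of that approach, assuming MySpecializedViewer can expose a setInput method (or extends a JFace viewer class that already has one; the setInput call here is illustrative):

// Create the specialized viewer once, next to the tree, and give it layout data
// so it grabs the remaining space in the two-column GridLayout.
viewer = new MySpecializedViewer(control, table);
viewer.getControl().setLayoutData(new GridData(SWT.FILL, SWT.FILL, true, true));

tree.addSelectionChangedListener(event -> {
    IStructuredSelection sel = (IStructuredSelection) event.getSelection();
    MyClass myInput = (MyClass) sel.getFirstElement();
    viewer.setInput(myInput);   // hand the selection to the existing viewer
    control.layout(true, true); // re-layout just this composite, not the whole shell
});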