I have an XML document that contains data like the following:
<Properties>
...
<util>
<parameters>
<parameter name="a">
<externalProvider code="02">
<attribute key="B" value="BB"/>
<attribute key="CC" value="AA"/>
</externalProvider>
</parameter>
<parameter name="B">
<externalProvider code="02">
<attribute key="paramName" value="AVV"/>
<attribute key="applicationName" value="DD"/>
</externalProvider>
</parameter>
</parameters>
</util>
<security>
<permissions>
<parameter name="c">
<externalProvider code="02">
<attribute key="zz" value="cc"/>
<attribute key="dd" value="ddw"/>
</externalProvider>
</parameter>
<parameter name="q">
<externalProvider code="02">
<attribute key="paramName" value="as"/>
<attribute key="saw" value="dd"/>
</externalProvider>
</parameter>
</permissions>
</security>
...
</Properties>
I need to deserialize the XML above in order to extract the elements that are inside the 'permissions' tag.
My problem is that inside the '...' there are 6 XML tags that I do not want to map to Java classes, so I do not want to define the whole hierarchy.
I need to use XStream to deserialize the object.
Can someone tell me how to do it?
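For reference, something along these lines is what I have in mind; the class names below (PropertiesRoot, Security, Permissions, Parameter, ExternalProvider, Attribute) are only placeholders and not from my real code, and I am not sure ignoreUnknownElements() is the right way to skip the tags I don't care about:

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.annotations.XStreamAlias;
import com.thoughtworks.xstream.annotations.XStreamAsAttribute;
import com.thoughtworks.xstream.annotations.XStreamImplicit;
import java.util.List;

@XStreamAlias("attribute")
class Attribute {
    @XStreamAsAttribute String key;
    @XStreamAsAttribute String value;
}

@XStreamAlias("externalProvider")
class ExternalProvider {
    @XStreamAsAttribute String code;
    @XStreamImplicit(itemFieldName = "attribute") List<Attribute> attributes;
}

@XStreamAlias("parameter")
class Parameter {
    @XStreamAsAttribute String name;
    ExternalProvider externalProvider;
}

@XStreamAlias("permissions")
class Permissions {
    @XStreamImplicit(itemFieldName = "parameter") List<Parameter> parameters;
}

@XStreamAlias("security")
class Security {
    Permissions permissions;
}

@XStreamAlias("Properties")
class PropertiesRoot {
    Security security; // only the branch I care about is mapped
}

public class PermissionsReader {
    public static void main(String[] args) {
        String xml = "...";  // the document shown above

        XStream xstream = new XStream();
        // skip <util>, the 6 tags inside '...' and anything else without a mapped field
        xstream.ignoreUnknownElements();
        xstream.processAnnotations(new Class[] { PropertiesRoot.class, Security.class,
                Permissions.class, Parameter.class, ExternalProvider.class, Attribute.class });

        PropertiesRoot root = (PropertiesRoot) xstream.fromXML(xml);
        List<Parameter> permissionParameters = root.security.permissions.parameters;
        System.out.println(permissionParameters.size());
    }
}

(Depending on the XStream version, I guess the placeholder types may also have to be whitelisted with something like xstream.allowTypes(new Class[]{...}); I have not verified that part.)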
Thanks
Need some help understanding why the configuration settings of a Service Fabric application aren't being overridden by the values defined in the application manifest, as expected. Currently I have some settings defined for my two different environments: the default settings are for the final Azure cluster, and I have a custom publish profile for my local dev cluster.
Below is what I have for each file:
SampleServFabricApp/ApplicationPackageRoot/ApplicationManifest.xml
<ApplicationManifest xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ApplicationTypeName="SampleServFabricAppType" ApplicationTypeVersion="1.0.0" xmlns="http://schemas.microsoft.com/2011/01/fabric">
<Parameters>
<Parameter Name="SampleServFabricApp_MessageTransportConfig_CertificateFindValue" DefaultValue="e47c0e4b80e9b83e39e5e1dc35610b6b84a3b764" />
<Parameter Name="SampleServFabricApp_MessageTransportConfig_CertificateRemoteCommonNames" DefaultValue="*.thefinaldomain.com" />
<Parameter Name="SampleServFabricApp_MessageTransportConfig_CertificateRemoteThumbprints" DefaultValue="e47c0e4b80e9b83e39e5e1dc35610b6b84a3b764" />
<Parameter Name="SampleServFabricApp_PartitionCount" DefaultValue="10" />
<Parameter Name="SampleServFabricApp_MinReplicaSetSize" DefaultValue="3" />
<Parameter Name="SampleServFabricApp_TargetReplicaSetSize" DefaultValue="3" />
</Parameters>
<ServiceManifestImport>
<ServiceManifestRef ServiceManifestName="SampleServFabricApp.EndpointPkg" ServiceManifestVersion="1.0.0" />
<ConfigOverrides>
<ConfigOverride Name="Config">
<Settings>
<Section Name="SampleServFabricApp_MessageTransportConfig">
<Parameter Name="CertificateFindValue" Value="[SampleServFabricApp_MessageTransportConfig_CertificateFindValue]" />
<Parameter Name="CertificateRemoteCommonNames" Value="[SampleServFabricApp_MessageTransportConfig_CertificateRemoteCommonNames]" />
<Parameter Name="CertificateRemoteThumbprints" Value="[SampleServFabricApp_MessageTransportConfig_CertificateRemoteThumbprints]" />
</Section>
</Settings>
</ConfigOverride>
</ConfigOverrides>
</ServiceManifestImport>
<DefaultServices>
<Service Name="SampleServFabricAppActorService" GeneratedIdRef="e07529c2-2426-4065-b621-90033a78704c|Persisted">
<StatefulService ServiceTypeName="SampleServFabricAppActorServiceType" TargetReplicaSetSize="[SampleServFabricApp_TargetReplicaSetSize]" MinReplicaSetSize="[SampleServFabricApp_MinReplicaSetSize]">
<UniformInt64Partition PartitionCount="[SampleServFabricApp_PartitionCount]" LowKey="-9223372036854775808" HighKey="9223372036854775807" />
</StatefulService>
</Service>
</DefaultServices>
</ApplicationManifest>
SampleServFabricApp/ApplicationParameters/dev_cluster.xml
<?xml version="1.0" encoding="utf-8"?>
<Application xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Name="fabric:/SampleServFabricApp" xmlns="http://schemas.microsoft.com/2011/01/fabric">
<Parameters>
<Parameter Name="SampleServFabricApp_MessageTransportConfig_CertificateFindValue" Value="4826f9a3ac95bca949fab19ea136e197" />
<Parameter Name="SampleServFabricApp_MessageTransportConfig_CertificateRemoteCommonNames" Value="ServiceFabricDevClusterCert" />
<Parameter Name="SampleServFabricApp_MessageTransportConfig_CertificateRemoteThumbprints" Value="4826f9a3ac95bca949fab19ea136e197" />
</Parameters>
</Application>
SampleServFabricApp.Endpoint/PackageRoot/Config/Settings.xml
<?xml version="1.0" encoding="utf-8"?>
<Settings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/2011/01/fabric">
<Section Name="SampleServFabricApp_MessageTransportConfig">
<Parameter Name="MaxMessageSize" Value="10000000" />
<Parameter Name="SecurityCredentialsType" Value="X509" />
<Parameter Name="CertificateFindType" Value="FindByThumbprint" />
<Parameter Name="CertificateFindValue" Value="e47c0e4b80e9b83e39e5e1dc35610b6b84a3b764" />
<Parameter Name="CertificateStoreLocation" Value="LocalMachine" />
<Parameter Name="CertificateStoreName" Value="My" />
<Parameter Name="CertificateProtectionLevel" Value="EncryptAndSign" />
<Parameter Name="CertificateRemoteCommonNames" Value="*.thefinaldomain.com" />
<Parameter Name="CertificateRemoteThumbprints" Value="e47c0e4b80e9b83e39e5e1dc35610b6b84a3b764" />
</Section>
</Settings>
When publishing the application to the local cluster, I can see the application creation log in the Output window, and the parameters seem to be ok:
2>Creating application...
2>ApplicationName : fabric:/SampleServFabricApp
2>ApplicationTypeName : SampleServFabricAppType
2>ApplicationTypeVersion : 1.0.0
2>ApplicationParameters : {
2>  "SampleServFabricApp_MessageTransportConfig_CertificateRemoteThumbprints" = "4826f9a3ac95bca949fab19ea136e197";
2>  "SampleServFabricApp_MessageTransportConfig_CertificateRemoteCommonNames" = "ServiceFabricDevClusterCert";
2>  "SampleServFabricApp_MessageTransportConfig_CertificateFindValue" = "4826f9a3ac95bca949fab19ea136e197";
2>}
2>Create application succeeded.
But using Service Fabric Explorer I can see that errors occurred during application startup, and if I check the event log I can see the following error:
failed to set security settings to { provider=SSL
protection=EncryptAndSign certType = '' store='LocalMachine/My'
findValue='FindByThumbprint:e47c0e4b80e9b83e39e5e1dc35610b6b84a3b764'
remoteCertThumbprints='e47c0e4b80e9b83e39e5e1dc35610b6b84a3b764'
remoteX509Names=('*.thefinaldomain.com',issuer=)
certChainFlags=40000000 isClientRoleInEffect=false
claimBasedClientAuthEnabled=false }: FABRIC_E_CERTIFICATE_NOT_FOUND
Some things I've already tried:
- Used Service Fabric explorer and confirmed that the Parameters under the Details tab of the application are correct and match the parameters that have been sent to the deployment script according to the Output window;
- Confirmed that the contents of the "Settings.xml" file under "C:\SfDevCluster\Data_App_Node_0\SampleServFabricAppType_App18\SampleServFabricApp.EndpointPkg.Config.1.0.0" are equal to the contents of the "SampleServFabricApp.Endpoint/PackageRoot/Config/Settings.xml";
Any idea if this is a bug or if I'm missing something that I truly cannot see?
As you can see, the deployment process seems to pick up the correct overridden values, but the application doesn't start, and the event viewer shows the default values instead of the ones supplied during deployment.
Thanks.
I just had a reply to the issue I opened on GitHub. It seems the described behavior is known and will be fixed in the SDK 3.3 release. For anyone who wants to follow it, you can find the issue here
We are trying to use Kafka with the WSO2 ESB server.
We've implemented an API which puts incoming messages into Kafka.
Then we've implemented an inbound endpoint that retrieves messages from Kafka and transfers them to other external systems.
Everything works well in the happy path, but when we test the "external systems down" scenario, the failed messages are not delivered once the external systems come back up.
How can we send failed messages to external systems?
API config:
<?xml version="1.0" encoding="UTF-8"?>
<api context="/api/event" name="EventAPI" xmlns="http://ws.apache.org/ns/synapse">
<resource methods="POST">
<inSequence>
<log category="DEBUG" description="" level="full"/>
<kafkaTransport.init>
<bootstrapServers>localhost:9092</bootstrapServers>
<keySerializerClass>org.apache.kafka.common.serialization.StringSerializer</keySerializerClass>
<valueSerializerClass>org.apache.kafka.common.serialization.StringSerializer</valueSerializerClass>
<maxPoolSize>20</maxPoolSize>
</kafkaTransport.init>
<kafkaTransport.publishMessages>
<topic>event_topic</topic>
</kafkaTransport.publishMessages>
<loopback/>
</inSequence>
<outSequence>
<payloadFactory media-type="json">
<format>{"result" : "OK"}</format>
</payloadFactory>
<property name="messageType" scope="axis2" type="STRING" value="application/json"/>
<send/>
</outSequence>
</resource>
</api>
Inbound config:
<?xml version="1.0" encoding="UTF-8"?>
<inboundEndpoint name="EventTransmitter" protocol="kafka"
sequence="transmit_sequence" suspend="false" onError="fault"
xmlns="http://ws.apache.org/ns/synapse">
<parameters>
<parameter name="interval">10</parameter>
<parameter name="coordination">true</parameter>
<parameter name="sequential">true</parameter>
<parameter name="zookeeper.connect">localhost:2181</parameter>
<parameter name="consumer.type">highlevel</parameter>
<parameter name="content.type">application/json</parameter>
<parameter name="topics">event_topic</parameter>
<parameter name="group.id">myconsumer</parameter>
<parameter name="consumer.id">myconsumer</parameter>
<parameter name="dual.commit.enabled">true</parameter>
<parameter name="auto.offset.reset">largest</parameter>
</parameters>
</inboundEndpoint>
Sequence:
<?xml version="1.0" encoding="UTF-8"?>
<sequence name="transmit_sequence" onError="fault" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
<send receive="event_transmit_out_sequence">
<endpoint key="gov:endpoints/HandlerEndpoint.xml"/>
</send>
</sequence>
Hello, thank you for reading this question. I have OrientDB set up and would like to add a second server and have the two replicate with each other. A single server on its own runs fine, and I have been using it for months. When I enable the Hazelcast plugin, the servers start communicating, and I can see them start to talk to each other. An error happens when they try to write to each other, though. This is the same issue discussed here:
https://groups.google.com/forum/#!topic/orient-database/QpZPG4y_KpU
For what it's worth, I have both of these servers deployed on the same machine, each with their own embedded database. The database paths are plocal:/home/chris/dbs/db2 and plocal:/home/chris/dbs/db1
2015-10-08 08:56:14:048 INFO [db2-orient] Saving distributed configuration file for database 'db' to: ./databases/db/distributed-config.json [OHazelcastPlugin]
2015-10-08 08:56:14:049 INFO [db2-orient] received new status idp2-orient.idp=SYNCHRONIZING [OHazelcastPlugin]
2015-10-08 08:56:18:054 WARNING [db2-orient]->[[db1-orient]] requesting deploy of database 'db' on local server... [OHazelcastPlugin]
Then on the other server, the one that started first, I see
[OHazelcastPlugin]{db=db} [db1-orient]<-[db2-orient] error on executing distributed request 0: deploy_db
com.orientechnologies.orient.server.distributed.ODistributedException: Distributed storage was not installed for database 'db'. Implementation found: com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.executeOnLocalNode(OHazelcastPlugin.java:745)
at com.orientechnologies.orient.server.hazelcast.ODistributedWorker.onMessage(ODistributedWorker.java:298)
at com.orientechnologies.orient.server.hazelcast.ODistributedWorker.run(ODistributedWorker.java:121)
I put a breakpoint on the line that throws that exception, and the storage type present at that time is indeed OLocalPaginatedStorage. My OrientDB version is 2.0.15.
My distributed config (same on both servers):
{
"autoDeploy": true,
"hotAlignment": false,
"executionMode": "undefined",
"readQuorum": 1,
"writeQuorum": 2,
"failureAvailableNodesLessQuorum": false,
"readYourWrites": true,
"clusters": {
"internal": {
},
"index": {
},
"*": {
"servers" : [ "<NEW_NODE>" ]
}
}
}
This is how I start my server. It is embedded and started from a Java application.
OServer server = OServerMain.create(true);
// dbPath here is "plocal:/home/chris/dbs/db2" (or .../db1 on the other server)
OPartitionedDatabasePool pool = server.startup(config.toString()).activate()
        .getDatabasePoolFactory().get(dbPath, OUser.ADMIN, OUser.ADMIN);
The config the server uses to start up is this:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<orient-server>
<handlers>
<handler class="com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin">
<parameters>
<parameter name="nodeName" value="db2-orient" />
<parameter name="enabled" value="true" />
<parameter name="configuration.db.default" value="${orientDBConfigs}/orientdb-default-distributed-db-config.json" />
<parameter name="configuration.hazelcast" value="${orientDBConfigs}/orientdb-hazelcast.xml" />
<parameter name="conflict.resolver.impl" value="com.orientechnologies.orient.server.distributed.conflict.ODefaultReplicationConflictResolver" />
<parameter name="sharding.strategy.round-robin" value="com.orientechnologies.orient.server.hazelcast.sharding.strategy.ORoundRobinPartitioninStrategy" />
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.handler.OAutomaticBackup">
<parameters>
<parameter name="enabled" value="false" />
<parameter name="delay" value="4h" />
<parameter name="target.directory" value="backup" />
<parameter name="target.fileName" value="${DBNAME}-${DATE:yyyyMMddHHmmss}.json" />
<parameter name="db.include" value="" />
<parameter name="db.exclude" value="" />
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.plugin.mail.OMailPlugin">
<parameters>
<parameter name="enabled" value="false" />
<parameter name="profile.default.mail.smtp.host" value="localhost" />
<parameter name="profile.default.mail.smtp.port" value="25" />
<parameter name="profile.default.mail.smtp.auth" value="true" />
<parameter name="profile.default.mail.smtp.starttls.enable" value="true" />
<parameter name="profile.default.mail.smtp.user" value="" />
<parameter name="profile.default.mail.smtp.password" value="" />
<parameter name="profile.default.mail.date.format" value="yyyy-MM-dd HH:mm:ss" />
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.handler.OServerSideScriptInterpreter">
<parameters>
<parameter name="enabled" value="false" />
</parameters>
</handler>
</handlers>
<network>
<protocols>
<protocol name="binary" implementation="com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary" />
</protocols>
<listeners>
<listener protocol="binary" ip-address="0.0.0.0" port-range="2424-2430" />
</listeners>
<cluster>
</cluster>
</network>
<storages>
<storage name="${dbName}" path="${dbPath}" loaded-at-startup="true" />
</storages>
<users>
<user name="root" password="root" resources="*"/>
</users>
<properties>
<entry name="db.pool.min" value="1" />
<entry name="db.pool.max" value="20" />
<entry name="cache.level1.enabled" value="false" />
<entry name="cache.level1.size" value="1000" />
<entry name="cache.level2.enabled" value="true" />
<entry name="cache.level2.size" value="1000" />
<entry name="profiler.enabled" value="true" />
<entry name="log.console.level" value="info" />
<entry name="log.file.level" value="fine" />
<entry name="plugin.dynamic" value="false"/>
</properties>
</orient-server>
Thanks again.
As wolf4ood pointed out, the storage type is replaced by the Hazelcast plugin in the plugin's onOpen method: paginated storage gets switched to distributed storage. This replacement does not happen if the path doesn't start with "plocal:./databases". Solution: make the path start with that. I have no idea if this is a good idea or not; that check seems to be in there for a reason, and the comments in the code seem to indicate something about running on the same JVM.
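In case it helps anyone, this is roughly what my startup call looks like after the change; the database name and the admin credentials here are just my local values:

OServer server = OServerMain.create(true);
// same call as in the question, only the URL now points under ./databases,
// so the Hazelcast plugin's onOpen() swaps in the distributed storage
OPartitionedDatabasePool pool = server.startup(config.toString()).activate()
        .getDatabasePoolFactory().get("plocal:./databases/db", OUser.ADMIN, OUser.ADMIN);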
I'm using WSO2 ESB 4.7.0 with PostgreSQL, and I'm inserting values into two different tables of the same database through a proxy service. This insertion operation is working properly. Now my scenario is: if the second insert fails, the first insert should roll back. For that I have used the transaction mediator (rollback transaction), but it is not working properly.
The proxy configuration and fault sequence are as follows:
<target >
<inSequence onError="myFaultHandler">
<transaction action="new"/>
<dbreport>
<connection>
<pool>
<password>Youtility11</password>
<user>youtilitydba</user>
<url>jdbc:postgresql://localhost:5432/DB2</url>
<driver>org.postgresql.Driver</driver>
</pool>
</connection>
<statement>
<sql>
insert into table1(name,id) values(?,?)</sql>
<parameter xmlns:ns="http://org.apache.synapse/xsd"
expression="//name/text()"
type="VARCHAR"/>
<parameter xmlns:ns="http://org.apache.synapse/xsd"
expression="//id/text()"
type="VARCHAR"/>
</statement>
</dbreport>
<log level="full">
<property name="name" expression="get-property('name')"/>
<property name="id" expression="get-property('id')"/>
</log>
<log level="full">
<property name="text" value="Reporting to the DB2"/>
</log>
<dbreport>
<connection>
<pool>
<password>Youtility11</password>
<user>youtilitydba</user>
<url>jdbc:postgresql://localhost:5432/DB2</url>
<driver>org.postgresql.Driver</driver>
</pool>
</connection>
<statement>
<sql>
insert into table2(firstname,lastname) values(?,?)</sql>
<parameter xmlns:ns="http://org.apache.synapse/xsd"
expression="//firstname/text()"
type="VARCHAR"/>
<parameter xmlns:ns="http://org.apache.synapse/xsd"
expression="//lastname/text()"
type="VARCHAR"/>
</statement>
</dbreport>
</inSequence>
</target>
And the fault sequence is:
<sequence xmlns="http://ws.apache.org/ns/synapse" name="myFaultHandler">
<log level="full">
<property name="message" value="**ROLLBACK**"/>
</log>
<transaction action="rollback"/>
</sequence>
Here, when I insert a wrong record into the second table it shows an error, but the first table's insert has already been committed. Why is the rollback not working? Let me know.
Thanks in advance
I want to create a custom report which will display all associated Products of all/any Orders. That means on the Order list page it shows all Orders and their associated Products, and on any individual Order record page it should display only the Products associated with that Order.
Previously I did that using the Report Wizard, and it was working as I wanted.
But I am not able to do it in Business Intelligence Development Studio.
This is the FetchXML:
<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="false">
<entity name="salesorderdetail">
<attribute name="productid" />
<attribute name="productdescription" />
<attribute name="priceperunit" />
<attribute name="quantity" />
<attribute name="extendedamount" />
<attribute name="salesorderdetailid" />
<order attribute="productid" descending="false" />
<link-entity name="salesorder" from="salesorderid" to="salesorderid" alias="ad">
<filter type="and">
<condition attribute="salesorderid" operator="eq">
</condition>
</filter>
</link-entity>
</entity>
</fetch>
How can I modify this XML so that it works as described above?
Finally, I was able to do it using a sub-query.
This is the sub-query FetchXML:
<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="false">
<entity name="salesorderdetail">
<attribute name="productid" />
<attribute name="productdescription" />
<attribute name="priceperunit" />
<attribute name="quantity" />
<attribute name="extendedamount" />
<attribute name="salesorderdetailid" />
<order attribute="productid" descending="false" />
<link-entity name="salesorder" from="salesorderid" to="salesorderid" alias="af">
<filter type="and">
<condition attribute="salesorderid" operator="eq" uitype="salesorder" value="#salesorderid"/>
</filter>
</link-entity>
</entity>
</fetch>
And this is the FetchXML of the main query:
<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="false">
<entity name="salesorder" enableprefiltering="1">
<attribute name="name" />
<attribute name="salesorderid" />
<order attribute="name" descending="false" />
<filter type="and">
<condition attribute="ownerid" operator="eq-userid" />
<condition attribute="statecode" operator="in">
<value>0</value>
<value>1</value>
</condition>
</filter>
</entity>
</fetch>
Hope this will help someone.