Writing multiple segments using BeanIO throws an "indeterminate size" error - bean-io

Please help: writing multiple segments throws this error:
"A segment of indeterminate size may not follow another component of indeterminate size"
Sample XML config:
<field name="noOfShipmentContents" type="Integer" />
<segment name="shipmentContentsPart2"
class="com.ShipmentContentsPart2"
collection="list" minOccurs="1" maxOccurs="unbounded">
<field name="shipmentContents" type="String" nillable="true" />
</segment>
<field name="noOfSpecialServices" type="Integer" />
<segment name="specialServicesPart3"
class="com.SpecialServicePart3"
collection="list" minOccurs="0" maxOccurs="unbounded">
<field name="chrgServCode" type="String" nillable="true" />
<field name="chrgAmt" type="String" nillable="true" />
</segment>
</record>
beanio.jar versions 2.0.7 and 2.1.0 both give the same error.
What JDK version?
1.6.0_35

I got the answer from Kevin, the developer of BeanIO (thanks): use occursRef="[name of field]" on segments whose number of occurrences depends on a preceding field in the same record.
The trick is configuring:
<field name="noOfSpecialServices" type="Integer" />
<segment name="specialServicesPart3" class="com.SpecialServicePart3"
         collection="list" occursRef="noOfSpecialServices">
This feature is available in BeanIO 2.1.x.
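For reference, here is a sketch of how the relevant part of the record mapping might look with both counted segments driven by occursRef (the field, segment, and class names are taken from the question above; I have not verified this exact mapping against BeanIO):
<field name="noOfShipmentContents" type="Integer" />
<segment name="shipmentContentsPart2" class="com.ShipmentContentsPart2"
         collection="list" occursRef="noOfShipmentContents">
    <field name="shipmentContents" type="String" nillable="true" />
</segment>
<field name="noOfSpecialServices" type="Integer" />
<segment name="specialServicesPart3" class="com.SpecialServicePart3"
         collection="list" occursRef="noOfSpecialServices">
    <field name="chrgServCode" type="String" nillable="true" />
    <field name="chrgAmt" type="String" nillable="true" />
</segment>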


OrientDB Embedded and Distributed error: Distributed Storage was not installed

Hello, thank you for reading this question. I have OrientDB set up and would like to add a second server and have the two replicate between each other. When one runs on its own it works fine, and I have been using it for months. When enabling the Hazelcast plugin, the servers start communicating, and I can see they start to talk to each other. An error happens when they try to write to each other, though. This is the same issue discussed here:
https://groups.google.com/forum/#!topic/orient-database/QpZPG4y_KpU
For what it's worth, I have both of these servers deployed on the same machine, each with its own embedded database. The database paths are plocal:/home/chris/dbs/db2 and plocal:/home/chris/dbs/db1.
2015-10-08 08:56:14:048 INFO [db2-orient] Saving distributed configuration file for database 'db' to: ./databases/db/distributed-config.json [OHazelcastPlugin]
2015-10-08 08:56:14:049 INFO [db2-orient] received new status idp2-orient.idp=SYNCHRONIZING [OHazelcastPlugin]
2015-10-08 08:56:18:054 WARNING [db2-orient]->[[db1-orient]] requesting deploy of database 'db' on local server... [OHazelcastPlugin]
Then on the other server, the one that started first, I see
[OHazelcastPlugin]{db=db} [db1-orient]<-[db2-orient] error on executing distributed request 0: deploy_db
com.orientechnologies.orient.server.distributed.ODistributedException: Distributed storage was not installed for database 'db'. Implementation found: com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.executeOnLocalNode(OHazelcastPlugin.java:745)
at com.orientechnologies.orient.server.hazelcast.ODistributedWorker.onMessage(ODistributedWorker.java:298)
at com.orientechnologies.orient.server.hazelcast.ODistributedWorker.run(ODistributedWorker.java:121)
I put a breakpoint on the line that throws that exception, and the storage type present at that time is indeed OLocalPaginatedStorage. My OrientDB version is 2.0.15.
My distributed config (the same on both servers):
{
    "autoDeploy": true,
    "hotAlignment": false,
    "executionMode": "undefined",
    "readQuorum": 1,
    "writeQuorum": 2,
    "failureAvailableNodesLessQuorum": false,
    "readYourWrites": true,
    "clusters": {
        "internal": {
        },
        "index": {
        },
        "*": {
            "servers": [ "<NEW_NODE>" ]
        }
    }
}
This is how I start my server. It is embedded and started via a Java application.
OServer server = OServerMain.create(true);
OPartitionedDatabasePool pool = server.startup(config.toString())
        .activate()
        .getDatabasePoolFactory()
        .get(dbPath, OUser.ADMIN, OUser.ADMIN);
The config the server uses to start up is this:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<orient-server>
<handlers>
<handler class="com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin">
<parameters>
<parameter name="nodeName" value="db2-orient" />
<parameter name="enabled" value="true" />
<parameter name="configuration.db.default" value="${orientDBConfigs}/orientdb-default-distributed-db-config.json" />
<parameter name="configuration.hazelcast" value="${orientDBConfigs}/orientdb-hazelcast.xml" />
<parameter name="conflict.resolver.impl" value="com.orientechnologies.orient.server.distributed.conflict.ODefaultReplicationConflictResolver" />
<parameter name="sharding.strategy.round-robin" value="com.orientechnologies.orient.server.hazelcast.sharding.strategy.ORoundRobinPartitioninStrategy" />
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.handler.OAutomaticBackup">
<parameters>
<parameter name="enabled" value="false" />
<parameter name="delay" value="4h" />
<parameter name="target.directory" value="backup" />
<parameter name="target.fileName" value="${DBNAME}-${DATE:yyyyMMddHHmmss}.json" />
<parameter name="db.include" value="" />
<parameter name="db.exclude" value="" />
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.plugin.mail.OMailPlugin">
<parameters>
<parameter name="enabled" value="false" />
<parameter name="profile.default.mail.smtp.host" value="localhost" />
<parameter name="profile.default.mail.smtp.port" value="25" />
<parameter name="profile.default.mail.smtp.auth" value="true" />
<parameter name="profile.default.mail.smtp.starttls.enable" value="true" />
<parameter name="profile.default.mail.smtp.user" value="" />
<parameter name="profile.default.mail.smtp.password" value="" />
<parameter name="profile.default.mail.date.format" value="yyyy-MM-dd HH:mm:ss" />
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.handler.OServerSideScriptInterpreter">
<parameters>
<parameter name="enabled" value="false" />
</parameters>
</handler>
</handlers>
<network>
<protocols>
<protocol name="binary" implementation="com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary" />
</protocols>
<listeners>
<listener protocol="binary" ip-address="0.0.0.0" port-range="2424-2430" />
</listeners>
<cluster>
</cluster>
</network>
<storages>
<storage name="${dbName}" path="${dbPath}" loaded-at-startup="true" />
</storages>
<users>
<user name="root" password="root" resources="*"/>
</users>
<properties>
<entry name="db.pool.min" value="1" />
<entry name="db.pool.max" value="20" />
<entry name="cache.level1.enabled" value="false" />
<entry name="cache.level1.size" value="1000" />
<entry name="cache.level2.enabled" value="true" />
<entry name="cache.level2.size" value="1000" />
<entry name="profiler.enabled" value="true" />
<entry name="log.console.level" value="info" />
<entry name="log.file.level" value="fine" />
<entry name="plugin.dynamic" value="false"/>
</properties>
Thanks again.
As wolf4ood pointed out, the storage type is replaced by the Hazelcast plugin in its onOpen method: the paginated storage gets switched to a distributed one. This replacement does not happen if the path doesn't start with "plocal:./databases". Solution: make the path start with that. I have no idea whether this is a good idea or not; that check seems to be in there for a reason, and the comments in the code seem to indicate something about running on the same JVM.
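For illustration, here is a sketch of the startup code from the question with the database path moved under ./databases so the plugin's check passes. The database name "db" is taken from the log output above, and the import locations are my assumption for OrientDB 2.0.x, so treat this as a sketch rather than a verified fix.
import com.orientechnologies.orient.core.db.OPartitionedDatabasePool;
import com.orientechnologies.orient.core.metadata.security.OUser;
import com.orientechnologies.orient.server.OServer;
import com.orientechnologies.orient.server.OServerMain;

// Same startup as in the question, but the path now starts with "plocal:./databases"
// (instead of plocal:/home/chris/dbs/db1), so OHazelcastPlugin can install the
// distributed storage when the database is opened.
OServer server = OServerMain.create(true);
server.startup(config.toString()).activate();

String dbPath = "plocal:./databases/db";
OPartitionedDatabasePool pool = server.getDatabasePoolFactory().get(dbPath, OUser.ADMIN, OUser.ADMIN);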

How to configure WildFly 8.2 to use AccessLogHandler

I have found that there is a handler, io.undertow.server.handlers.accesslog.AccessLogHandler, that can log HTTP access.
However, I am not able to configure it so that it produces any log messages.
Here is a snippet from my standalone.xml:
<filter class-name="io.undertow.server.handlers.accesslog.AccessLogHandler" name="access-log-handler" module="io.undertow.core">
<param name="formatString" value="common"/>
<param name="accessLogReceiver" value="io.undertow.server.handlers.accesslog.JBossLoggingAccessLogReceiver"/>
</filter>
My question is: how do I configure that handler so it starts producing log messages?
There is no need to add a custom filter for the access log. All you need to do is configure the access log in the Undertow subsystem itself.
This would be an example:
<host name="default-host" >
<location name="/" handler="welcome-content">
....
<access-log />
</host>
which will by default log in to log folder with prefix access_.log
you can also customize various things, from xsd:
<xs:attribute name="pattern" use="optional" type="xs:string" default="common"/>
<xs:attribute name="worker" use="optional" type="xs:string" default="default"/>
<xs:attribute name="directory" use="optional" type="xs:string" default="${jboss.server.log.dir}"/>
<xs:attribute name="relative-to" use="optional" type="xs:string" />
<xs:attribute name="prefix" use="optional" type="xs:string" default="access_log"/>
<xs:attribute name="suffix" use="optional" type="xs:string" default=".log"/>
I missed adding this XML snippet (StackOverflow):
<host name="default-host" >
.....
<filter-ref name="access-log-handler"/>
</host>
And then I got this:
Caused by: java.lang.NoSuchMethodException: io.undertow.server.handlers.accesslog.AccessLogHandler.<init>(io.undertow.server.HttpHandler)"}}
Which is a known bug: see this or this.
It is possible to use jboss-cli to add the handler and then see how standalone.xml changed:
/subsystem=undertow/configuration=filter/custom-filter=access-log-handler:add(class-name=io.undertow.server.handlers.accesslog.AccessLogHandler, module=io.undertow.core)
/subsystem=undertow/server=default-server/host=default-host/filter-ref=access-log-handler:add
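For reference, after those two commands the Undertow subsystem in standalone.xml should contain roughly the filter definition and filter-ref already shown in this post; a sketch (the subsystem namespace version and surrounding elements are assumptions):
<subsystem xmlns="urn:jboss:domain:undertow:1.2">
    ...
    <server name="default-server">
        <host name="default-host">
            <filter-ref name="access-log-handler"/>
        </host>
    </server>
    <filters>
        <filter name="access-log-handler" class-name="io.undertow.server.handlers.accesslog.AccessLogHandler" module="io.undertow.core"/>
    </filters>
</subsystem>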

AutoFixture EF entity constraints

Is it possible to configure AutoFixture so that it adheres to the entity constraints (from the EDMX file)?
E.g. consider a snippet from the CSDL section of my EDMX file:
<EntityType Name="RndtAd">
...
<Property Name="AD" Type="Decimal" Precision="12" Scale="0" Nullable="false" />
<Property Name="USERNAME" Type="String" MaxLength="255" FixedLength="false" Unicode="true" />
<Property Name="VERSION" Type="Decimal" Precision="12" Scale="4" Nullable="false" />
<Property Name="EFFECTIVE_FROM" Type="DateTime" Precision="3" />
<Property Name="EFFECTIVE_FROM_TZ" Type="DateTime" Precision="7" />
<Property Name="EFFECTIVE_TILL" Type="DateTime" Precision="3" />
<Property Name="EFFECTIVE_TILL_TZ" Type="DateTime" Precision="7" />
<Property Name="IS_TEMPLATE" Type="String" MaxLength="1" FixedLength="true" Unicode="false" />
<Property Name="IS_USER" Type="String" MaxLength="1" FixedLength="true" Unicode="false" />
<Property Name="STRUCT_CREATED" Type="String" MaxLength="1" FixedLength="true" Unicode="false" />
<Property Name="AD_TP" Type="String" MaxLength="20" FixedLength="false" Unicode="true" />
<Property Name="PERSON" Type="String" MaxLength="40" FixedLength="false" Unicode="true" />
<Property Name="TITLE" Type="String" MaxLength="20" FixedLength="false" Unicode="true" />
<Property Name="FUNCTION_NAME" Type="String" MaxLength="20" FixedLength="false" Unicode="true" />
<Property Name="COMPANY" Type="String" MaxLength="40" FixedLength="false" Unicode="true" />
<Property Name="STREET" Type="String" MaxLength="40" FixedLength="false" Unicode="true" />
...
What I would like is for fixture.Create<RndtAd>() to randomly generate an entity where all of the above constraints are satisfied.
What options do I have? All suggestions are welcome.
EDIT: I'm not bound to AutoFixture. If there is another tool that does the job, I'm OK with that too.
AutoFixture has no built-in support for Entity Framework, but during the last couple of years, several people have fought their own battles to integrate the two.
Here's what a Google search turned up for me:
Autofixture and Moq to test Entity Framework project
AutoFixture.AutoEntityFramework
Creating a domain model without circular references in Entity Framework
Using autofixture in my data integration tests to create proxies
How to mockup Entity Framework 6 With Moq & Autofixture
Perhaps you can find some inspiration by looking over some of those resources.
As-is, AutoFixture can't be customized through EDMX files.

Solr DataImportHandler ERROR DocBuilder Exception while processing

I have been trying to get the Solr DIH working with PostgreSQL for hours now and I cannot find the problem, as the logger doesn't tell me anything helpful.
My aim is simply to synchronize the data from the database with Solr (using the DIH).
My setup is as follows:
Jetty, Windows 8
solrconfig.xml (nothing changed except for the following)
[...]
<lib dir="../../../../dist/" regex="solr-dataimporthandler-.*\.jar" />
<lib dir="../../../../dist/" regex="sqljdbc4.*\.jar" />
<lib dir="../../../../dist/" regex="postgresql-.*\.jar" />
[...]
data-config.xml
<dataConfig>
    <dataSource type="JdbcDataSource"
                driver="org.postgresql.Driver"
                url="jdbc:postgresql://localhost:5432/solrdih"
                user="solrdih"
                password="solrdih"
                batchSize="100" />
    <document>
        <entity name="solrdih"
                query="SELECT * FROM myTable">
            <field column="id" name="id" />
        </entity>
    </document>
</dataConfig>
schema.xml (nothing changed except for the following)
[...]
<fields>
<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
<field name="name" type="text" indexed="true" stored="true"/>
<field name="description" type="text" indexed="true" stored="true"/>
[...]
Calling http://localhost:8983/solr/solr/dataimport, I get the following error:
ERROR DocBuilder Exception while processing: solrdih document : SolrInputDocument[]:org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to execute query: SELECT * FROM myTable Processing Document # 1
ERROR DataImporter Full Import failed:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to execute query: SELECT * FROM myTable Processing Document # 1
Could someone please provide hints as to where to look for the error?
Thanks in advance!
So, this error came from all the way down in Postgres, and everything works fine since I changed pg_hba.conf from
host all all 127.0.0.1/32 md5
to
host all all 127.0.0.1/32 trust

samlp:RequestAbstractType - Trying to understand the ExtensionsType

According to SAML 2.0, a RequestAbstractType is defined in the following way:
<complexType name="RequestAbstractType" abstract="true">
<sequence>
<element ref="saml:Issuer" minOccurs="0"/>
<element ref="ds:Signature" minOccurs="0"/>
<element ref="samlp:Extensions" minOccurs="0"/>
</sequence>
<attribute name="ID" type="ID" use="required"/>
<attribute name="Version" type="string" use="required"/>
<attribute name="IssueInstant" type="dateTime" use="required"/>
<attribute name="Destination" type="anyURI" use="optional"/>
<attribute name="Consent" type="anyURI" use="optional"/>
</complexType>
What I'm interested in is the Extensions element, which is defined as:
<element name="Extensions" type="samlp:ExtensionsType"/>
<complexType name="ExtensionsType">
<sequence>
<any namespace="##other" processContents="lax" maxOccurs="unbounded"/>
</sequence>
</complexType>
How would I add/implement such an extension? I have no clue how to extend the RequestAbstractType.
The Extensions element allows you to include anything you want within it. How any data inside that element is added and processed depends on your SAML product.
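As a sketch, an AuthnRequest (which extends RequestAbstractType) could carry a custom element inside Extensions like this; the myext namespace, element name, and attribute values below are made up purely for illustration:
<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                    ID="_abc123" Version="2.0" IssueInstant="2015-10-08T08:56:14Z">
    <saml:Issuer>https://sp.example.org</saml:Issuer>
    <samlp:Extensions>
        <!-- The schema's <any namespace="##other" processContents="lax"/> allows any
             element from a non-SAML-protocol namespace to appear here. -->
        <myext:RequestedDepartment xmlns:myext="http://example.org/saml-ext">HR</myext:RequestedDepartment>
    </samlp:Extensions>
</samlp:AuthnRequest>
How the receiving party interprets the extra element is, as noted, entirely product-specific.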
For a real-world example of how it has been used, here's a spec that leverages it: http://docs.oasis-open.org/security/saml/SpecDrafts-Post2.0/sstc-saml-protocol-ext-rac.pdf