Sybase is loading incorrect numeric data sent by Spring Batch - JdbcTemplate - spring-batch

I have an ETL job built using Spring Batch, and the DAO layer uses Spring's JdbcTemplate. The issue is with loading the numeric datatype. When the batch runs for a large number of records, a good number of the numeric values (not all) are loaded incorrectly (the pattern is that the value gets multiplied by 10^scale).
I am using the batchUpdate method with a PreparedStatement. The driver is jconn4.jar from Sybase, version 7.0.7.
I am printing the values while setting them on the PreparedStatement, and I do not see the values being manipulated on the Java side.
Could someone please advise on what could be causing this?
Thanks in advance.
Edit: Further info: Sybase version 15.7, Spring Core and JDBC version 4.2.6, Spring Batch version 3.0.4, Java version 8.
Also, has anyone used SybPreparedStatement from the jConnect library? I found a Sybase Infocenter link where they recommend using it particularly with numeric data types, but I find the documentation on how to use this PreparedStatement sparse. Could you please share whether you have tried SybPreparedStatement, what the challenges were, and whether you could use it successfully?
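For illustration, a minimal sketch of this kind of batch insert, assuming a hypothetical TRADE table with a NUMERIC(15,4) amount column: binding the value explicitly with setBigDecimal at the column's scale, and logging exactly what is bound, helps rule out scale handling on the Java side before suspecting the driver.

// Minimal sketch of the kind of batch insert described above; the table and
// column names (TRADE, amount) are hypothetical. Setting the numeric value
// explicitly via setBigDecimal with a fixed scale (instead of setObject)
// makes the intended scale unambiguous.
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;

public class TradeDao {

    private final JdbcTemplate jdbcTemplate;

    public TradeDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public int[] insertAmounts(final List<BigDecimal> amounts) {
        // Assumes a NUMERIC(15,4) column; adjust the scale to match the actual DDL.
        final String sql = "INSERT INTO TRADE (amount) VALUES (?)";
        return jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                BigDecimal value = amounts.get(i).setScale(4, RoundingMode.HALF_UP);
                // Log what is actually bound, to compare against what lands in Sybase.
                System.out.println("binding row " + i + " -> " + value.toPlainString());
                ps.setBigDecimal(1, value);
            }

            @Override
            public int getBatchSize() {
                return amounts.size();
            }
        });
    }
}

If the logged values are correct but the rows in Sybase still come out scaled by 10^scale, that points at the driver or the server rather than the Spring layer.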

Related

Extra '\' is being inserted into Quartz tables' bytea column when using Npgsql and a PostgreSQL database

When inserting data into a PostgreSQL (PG) database (version 11) from a WPF application (C# and NHibernate) using the Npgsql v1.0.0/4.1.5 provider, data is getting saved in an incorrect format in the 'bytea' column of the PG database. For troubleshooting purposes, just to see what data is getting inserted, we converted the bytea column to the text type, and we observed additional double backslashes being introduced into the data, which we do not want. We want to identify the root cause of this incorrect format and how to correct it.
Error1: Couldn't retrieve job: The input stream is not a valid binary format. The starting contents (in bytes) are: 5C-30-30-30-31-30-30-30-30-30-30-66-66-66-66-66-66 ...
Error2: Couldn't retrieve job: Destination array was not long enough. Check destIndex and length, and the array's lower bounds.
Support needed: help in identifying the problematic DLL or pinpointing the dependency issue, so as to get the Quartz implementation working with minor changes to the application.
Components used in the application:
Castle.windsor 2.5.2
Dotconnect for PostgreSQL
Quartz 1.0.3
WPF
Npgsql 4.1.5
.Net Framework 4.6.1
PostgreSQL 11

Using Slick with Kudu/Impala

Kudu tables can be accessed via Impala and thus via its JDBC driver. Thanks to that, they are accessible via the standard Java/Scala JDBC API. I was wondering whether it is possible to use Slick for this. Or, if not, does any other high-level Scala DB framework support Impala/Kudu?
Slick can be used with any JDBC database
http://slick.lightbend.com/doc/3.3.0/database.html
At least for me, Slick is not fully compatible with Impala/Kudu. Using Slick, I cannot modify DB entities: I cannot create, update, or delete any item. It works only for reading data.
There are two ways you could use Slick with an arbitrary JDBC driver (and SQL dialect).
The first is to use low-level JDBC calls. The SimpleDBIO class gives you access to a JDBC connection:
val getAutoCommit = SimpleDBIO[Boolean](_.connection.getAutoCommit)
That example is from the Slick manual.
However, I think you're more interested in working at a higher level than that. In that case, for Slick, you'd need to implement a custom Profile. If Impala is similar enough to an existing database profile, you may be able to extend an existing profile and adjust it to account for any differences. For example, this would allow you to customize how SQL is formatted for Impala, how timestamps are represented, and how column names are quoted. The documentation on Porting SQL from Other Database Systems to Impala would give you an idea of what needs to change in a driver.
Or, if not, does any other high-level Scala DB framework support Impala/Kudu?
None of the mainstream libraries seem to support Impala as a feature. Having said that, the Doobie documentation mentions customising connections for Hive. So it may be worth quickly trying Doobie to see if you can query and insert, for example.

How to use Solr with PostgreSQL and index a table

I am new to Solr, with the specific need to crawl an existing database table and generate results.
Every online example/tutorial I've found so far only explains how you give Solr documents and they get indexed, with no indication of how to do the same with a database.
Can anyone please explain the steps to achieve this?
Links like this wiki show everything with a JDBC driver and MySQL, so I even doubt whether Solr supports this with .NET at all. My tech boundaries are C# and PostgreSQL.
You have stumbled upon the included support for JDBC already, but you have to use the Postgres JDBC driver. The example will be identical to the MySQL one, but you'll have to use the proper URL for Postgres instead and reference the JDBC driver (which will depend on which Postgres JDBC driver you use).
jdbc:postgresql://localhost/test
This is a configuration option in Solr, and isn't related to .NET or other external dependencies.
However, the other option is to write the indexing code yourself, and this can often be a good solution as it makes it easier to pre-process the content and apply certain logic before storing content in Solr. For .NET you have SolrNet, a Solr client that makes it easy both to query Solr and to submit documents to it.
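To make the second option concrete, here is a rough sketch of hand-rolled indexing, shown in Java with SolrJ for illustration (the same read-rows-and-push pattern applies with SolrNet in C#); the table name, column names, and Solr core URL are made up.

// Rough sketch of hand-rolled indexing: read rows from PostgreSQL via JDBC
// and push them to Solr with SolrJ. The table name (articles), columns (id,
// title, body) and the Solr core URL are hypothetical.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class PostgresToSolrIndexer {

    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/test", "user", "password");
             HttpSolrClient solr = new HttpSolrClient.Builder(
                     "http://localhost:8983/solr/mycore").build();
             Statement stmt = db.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, title, body FROM articles")) {

            while (rs.next()) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", rs.getString("id"));
                doc.addField("title", rs.getString("title"));
                doc.addField("body", rs.getString("body"));
                // Pre-processing or custom logic can be applied here before adding.
                solr.add(doc);
            }
            solr.commit();
        }
    }
}

That pre-processing hook is the main advantage over the built-in JDBC import: you can clean, join, or enrich rows in code before they reach Solr.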

Connecting Neo4j and Matlab

Would anyone happen to know if there is a good way of operating Neo4j (or any other graph database) from Matlab?
The R environment seems to have RNeo4j, and I'm surprised that I didn't find any equivalent.
Thanks!
Just a few ideas:
You can use the Cypher REST interface of Neo4j and this Matlab JSON plugin (found via this SO answer).
Or you can use the Matlab JDBC connection (as with SQLite, use OTHER as the vendor string) and the Neo4j JDBC driver; a minimal JDBC sketch follows below.
Matlab also offers a native ODBC connection system, but unfortunately Neo4j doesn't provide an ODBC driver, even though there are some experiments going on...
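A minimal sketch of the JDBC route, shown as plain Java since a Matlab JDBC connection ends up using the same driver class and URL; the URL scheme is an assumption that depends on the Neo4j JDBC driver version you use.

// Minimal sketch of querying Neo4j over its JDBC driver. The URL scheme (and
// whether you need Class.forName) depends on the driver version: older
// REST-based drivers use jdbc:neo4j://host:7474/, newer ones
// jdbc:neo4j:bolt://host:7687, so treat these values as assumptions to adapt.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Neo4jJdbcExample {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("MATCH (n) RETURN n.name AS name LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
        }
    }
}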

Is there a way to persist HSQLDB data?

We have all of our unit tests written so that they create and populate tables in HSQLDB. I want the developers who use this to be able to write queries against this HSQL DB (1) because by writing queries they can better understand the data model, and those not as familiar with SQL can play with the data before writing the runtime statements, and (2) because they don't have access to the test DB, for security reasons. Is there a way to persist the test data so that it may be examined and analyzed with an SQL client?
Right now I am jury-rigging it by switching the data source to a different DB (like DB2/MySQL, then connecting to that DB on my machine so I can play with persistent data); however, it would be easier if HSQLDB supported persisting this than to have to explain how to do this to every new developer.
Just to be clear, I need an SQL client to interact with persistent data, so debugging and inspecting in-memory state won't do. This has more to do with initial development than with debugging/maintenance/testing.
If you use an HSQLDB Server instance for your tests, the data will survive the test run.
If the server uses a jdbc:hsqldb:mem:aname (all-in-memory) url for its database, then the data will be available while the server is running. Alternatively the server can use a jdbc:hsqldb:file:filepath url and the data is persisted to files.
The latest HSQLDB docs explain the different options. Most of the observations also apply to older (1.8.x) versions. However, the latest version 2.0.1 supports starting a server and creating databases dynamically upon the first connection, which can simplify testing a lot.
http://hsqldb.org/doc/2.0/guide/deployment-chapt.html#N13C3D
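A minimal sketch of the server option, assuming HSQLDB 2.x; the database name, file path, and port below are placeholders.

// Minimal sketch of starting an HSQLDB Server whose database is backed by
// files, so the test data survives and can be browsed with any SQL client.
// The database name ("testdb"), file path, and port are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;

import org.hsqldb.server.Server;

public class PersistentHsqldbServer {

    public static void main(String[] args) throws Exception {
        Server server = new Server();
        server.setDatabaseName(0, "testdb");
        // A file: path persists the data; use "mem:testdb" instead to keep it
        // in memory only for as long as the server runs.
        server.setDatabasePath(0, "file:target/hsqldb/testdb");
        server.setPort(9001);
        server.start();

        // Tests (and any SQL client) connect to the same server URL.
        try (Connection con = DriverManager.getConnection(
                "jdbc:hsqldb:hsql://localhost:9001/testdb", "SA", "")) {
            con.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS example (id INT PRIMARY KEY, name VARCHAR(50))");
        }
    }
}

Any SQL client can then connect to jdbc:hsqldb:hsql://localhost:9001/testdb while the server is up, and with the file: path the data is still there after a restart.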