I have been working to set up ORMLite as the primary data access layer between a PostgreSQL database and a Java application. Everything has been fairly straightforward until I started messing with PostgreSQL's array types. In my case, I have two tables that make use of the text[] array type. Following the documentation, I created a custom data persister as below:
public class StringArrayPersister extends StringType {

    private static final StringArrayPersister singleTon = new StringArrayPersister();

    private StringArrayPersister() {
        super(SqlType.STRING, new Class<?>[]{String[].class});
    }

    public static StringArrayPersister getSingleton() {
        return singleTon;
    }

    @Override
    public Object javaToSqlArg(FieldType fieldType, Object javaObject) {
        String[] array = (String[]) javaObject;
        if (array == null) {
            return null;
        } else {
            String join = "";
            for (String str : array) {
                join += str + ",";
            }
            return "'{" + join.substring(0, join.length() - 1) + "}'";
        }
    }

    @Override
    public Object sqlArgToJava(FieldType fieldType, Object sqlArg, int columnPos) {
        String string = (String) sqlArg;
        if (string == null) {
            return null;
        } else {
            return string.replaceAll("[{}]", "").split(",");
        }
    }
}
And then in my business object implementation, I set up the persister class on the column like so:
@DatabaseField(columnName = TAGS_FIELD, persisterClass = StringArrayPersister.class)
private String[] tags;
Whenever I try inserting a new record with the Dao.create statement, I get an error message saying tags is of type text[], but got character varying... However, when querying existing records from the database, the business object (and text array) load just fine.
Any ideas?
UPDATE:
PostgreSQL 9.2. The exact error message:
Caused by: org.postgresql.util.PSQLException: ERROR: column "tags" is of type text[] but expression is of type character varying  Hint: You will need to rewrite or cast the expression.
I've not used ORMLite before (I generally use MyBatis); however, I believe the proximate issue is this code:
private StringArrayPersister() {
    super(SqlType.STRING, new Class<?>[]{String[].class});
}
SqlType.STRING is mapped to VARCHAR in the ORMLite code, which I therefore believe is the proximate cause of the error you're getting. See the ORMLite SQL Data Types documentation for more detail on that.
Try changing it to this:
private StringArrayPersister() {
    super(SqlType.OTHER, new Class<?>[]{String[].class});
}
There may be other tweaks necessary as well to get it fully up and running, but that should get you past this particular error with the varchar type mismatch.
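For clarity, a minimal sketch of the persister with only that one change applied (everything else is the code from the question, and further tweaks may still be needed):

public class StringArrayPersister extends StringType {

    private static final StringArrayPersister singleTon = new StringArrayPersister();

    private StringArrayPersister() {
        // SqlType.OTHER instead of SqlType.STRING, so ORMLite no longer
        // presents the value to PostgreSQL as a varchar
        super(SqlType.OTHER, new Class<?>[]{String[].class});
    }

    public static StringArrayPersister getSingleton() {
        return singleTon;
    }

    // javaToSqlArg / sqlArgToJava unchanged from the question
}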
Related
I am attempting to accept a List from the browser and use it within a SQL query to a Postgres database. The following code snippet tries to show the function that I have written to do this. Some of the variable names have been changed, in case there appear to be discrepancies.
static public List<Map<String,Object>> fetch(NamedParameterJdbcTemplate jdbcTemplate, List<Integer> id){
    List<Map<String,Object>> result = new ArrayList<>();
    String sql = "select * from lookup where id && ARRAY[ :ids ]";
    MapSqlParameterSource parameters = new MapSqlParameterSource();
    parameters.addValue("ids", id, Types.INTEGER);
    result = jdbcTemplate.query(sql,
            parameters,
            new RowMapper<Map<String,Object>>() { ...
            }
    );
    return result;
}
The lookup table's id field is a Postgres array, hence my needing to use && and the ARRAY constructor.
This function is called by many different endpoints, which pass in the NamedParameterJdbcTemplate as well as a list of Integers. The problem I am having is that if any integer in the list is < 100, I get the following message:
Bad value for type int : {20}
Is there another way of doing this, or a way around this error?
EDIT:
It turned out to be partly the problem described in the answer, but also that I was using
rs.getInt(col)
instead of
rs.getArray(col)
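For completeness, the working row mapping looked roughly like this (a sketch; the column name "id" and the Integer[] element type are assumptions):

new RowMapper<Map<String, Object>>() {
    @Override
    public Map<String, Object> mapRow(ResultSet rs, int rowNum) throws SQLException {
        Map<String, Object> row = new HashMap<>();
        // the array column must be read with getArray, not getInt
        java.sql.Array arr = rs.getArray("id");
        row.put("id", arr == null ? null : (Integer[]) arr.getArray());
        return row;
    }
}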
There's an error I can see in the SQL, and probably the wrong choice of API after that. First in the query:
select * from lookup where id && ARRAY[ :ids ]
To bind an array parameter, it must not be placed in the ARRAY constructor, but rather you need to use JDBC binding like this:
select * from lookup where id && ?
As you've noticed, I'm not using a named parameter in these examples, because NamedParameterJdbcTemplate does not provide a route to obtaining the java.sql.Connection object or a proxy to it. You can access it through the PreparedStatementSetter if you use the JdbcOperations interface instead.
public static List<Map<String,Object>> fetch(NamedParameterJdbcTemplate jdbcTemplate, List<Integer> idlist){
    List<Map<String,Object>> result = new ArrayList<>();
    String sql = "select * from lookup where id && ?";

    final Integer[] ids = idlist.toArray(new Integer[0]);

    PreparedStatementSetter parameters = new PreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement stmt) throws SQLException {
            Connection conn = stmt.getConnection();
            // this can only be done through the Connection
            java.sql.Array arr = conn.createArrayOf("integer", ids);
            // you can use setObject(1, arr, java.sql.Types.ARRAY) instead of setArray
            // in case the connection wrapper doesn't pass it on to the JDBC driver
            stmt.setArray(1, arr);
        }
    };

    JdbcOperations jdo = jdbcTemplate.getJdbcOperations();
    result = jdo.query(sql,
            parameters,
            new RowMapper<Map<String,Object>>() { ...
            }
    );
    return result;
}
There might be errors in the code, since I normally use a different set of APIs. (Note that PreparedStatementSetter.setValues is declared to throw java.sql.SQLException, so no try-catch is needed in that function.) You should be able to handle it from here on.
We have an n-tier application where we read BLOB objects stored in a PostgreSQL database.
At times, when we try to access the BLOB through an input stream, we get "org.postgresql.util.PSQLException: ERROR: invalid large-object descriptor: 0". From reading other blogs, this exception comes up whenever a BLOB is accessed outside the transaction (after the transaction is committed).
But in our case we get this exception even though the transaction is active. The BLOB is read within the transaction.
Any pointers as to why this exception is occurring even though the transaction is active?
Your description of the problem does not have specifics, but in my code this error showed up when I tried to use a Large Object outside the data access method. As in your case, the object was formed in the method. This is consistent with what other people have noticed on this forum: a Large Object exists only within the data access method (or transaction). I needed byte[], so I converted the Large Object within the method, wrapped it up in a Data Transfer Object, and was able to use it in other layers. These are the relevant code snippets:
//This is the Data Access Class
@Named
public class SupportDocsDAO {

    protected ResultSet resultSet;
    private LargeObject lob;

    // SupportDocs is an Entity class in the Data Transfer Objects package
    private SupportDocs supportDocsDTO;

    public LargeObject getLob() {
        return lob;
    }

    public void setLob(LargeObject lob) {
        this.lob = lob;
    }

    public SupportDocs getSupportDocsDTO() {
        return supportDocsDTO;
    }

    public void setSupportDocsDTO(SupportDocs supportDocsDTO) {
        this.supportDocsDTO = supportDocsDTO;
    }

    //.... other code

    public SupportDocs fetchSupportDocForDescr(SupportDocs supportDocs1) {
        Session session = HibernateUtil.getSessionFactory().openSession();
        session.doWork(new Work() {
            @Override
            public void execute(java.sql.Connection connection) throws SQLException {
                java.sql.PreparedStatement ps = null;
                try {
                    LargeObjectManager lobm =
                            connection.unwrap(org.postgresql.PGConnection.class).getLargeObjectAPI();
                    ps = connection.prepareCall("{call ret_lo_supportdocs_id(?)}");
                    ps.setInt(1, supportDocs1.getSuppDocId());
                    ps.execute();
                    resultSet = ps.getResultSet();
                    while (resultSet.next()) {
                        supportDocsDTO.setFileNameDoc(resultSet.getString("filenamedoc"));
                        supportDocsDTO.setExtensionSd(resultSet.getString("extensionsd"));
                        long oid = resultSet.getLong("suppdoc_oid");
                        setLob(lobm.open(oid, LargeObjectManager.READ));
                        //This is the conversion of the Large Object into byte[]
                        supportDocsDTO.setSuppDocImage(lob.read(lob.size()));
                        System.out.println("object size: " + lob.size());
                    }
                    // other code, catch, cleanup with finally, and return supportDocsDTO
This works without problems. I can recreate images and videos from the obtained byte[].
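As a usage illustration: once the DTO is populated inside the data access method, the byte[] can be consumed in any layer. A sketch (the dao variable and the getter names, mirroring the setters above, are assumptions):

SupportDocs doc = supportDocsDAO.fetchSupportDocForDescr(request);

// write the fetched document to disk, purely to show that the byte[]
// remains usable after the session and transaction have ended
try (FileOutputStream out =
        new FileOutputStream(doc.getFileNameDoc() + "." + doc.getExtensionSd())) {
    out.write(doc.getSuppDocImage());
}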
I'm using FileHelpers to parse a very wide, fixed-format file and want to be able to take the resulting object and load it into a DB using EF. I'm getting a missing-key error when I try to load the object into the DB, and when I try to add an Id I get a FileHelpers error. So it seems like either fix breaks the other. I know I can map a FileHelpers object to a POCO object and load that, but I'm dealing with dozens (sometimes hundreds) of columns, so I would rather not have to go through that hassle.
I'm also open to other suggestions for parsing a fixed-width file and loading the results into a DB. One option of course is to use an ETL tool, but I'd rather do this in code.
Thanks!
This is the FileHelpers class:
public class AccountBalanceDetail
{
    [FieldHidden]
    public int Id; // Added to try and get EF to work

    [FieldFixedLength(1)]
    public string RecordNuber;

    [FieldFixedLength(3)]
    public string Branch;

    // Additional fields below
}
And this is the method that's processing the file:
public static bool ProcessFile()
{
    var dir = Properties.Settings.Default.DataDirectory;
    var engine = new MultiRecordEngine(typeof(AccountBalanceHeader), typeof(AccountBalanceDetail), typeof(AccountBalanceTrailer));
    engine.RecordSelector = new RecordTypeSelector(CustomSelector);
    var fileName = dir + "\\MOCK_ACCTBAL_L1500.txt";
    var res = engine.ReadFile(fileName);

    foreach (var rec in res)
    {
        var type = rec.GetType();
        if (type.Name == "AccountBalanceHeader") continue;
        if (type.Name == "AccountBalanceTrailer") continue;

        var data = rec as AccountBalanceDetail; // Throws an error if AccountBalanceDetail.Id has a getter and setter

        using (var ctx = new ApplicationDbContext())
        {
            // Throws an error if there is no valid Id on AccountBalanceDetail
            // EntityType 'AccountBalanceDetail' has no key defined. Define the key for this EntityType.
            ctx.AccountBalanceDetails.Add(data);
            ctx.SaveChanges();
        }
        //Console.WriteLine(rec.ToString());
    }
    return true;
}
Entity Framework needs the key to be a property, not a field, so you could try declaring it instead as:
public int Id { get; set; }
I suspect FileHelpers might well be confused by the autogenerated backing field, so you might need to do it long form in order to be able to mark the backing field with the [FieldHidden] attribute, i.e.,
[FieldHidden]
private int _Id;

public int Id
{
    get { return _Id; }
    set { _Id = value; }
}
However, you are trying to use the same class for two unrelated purposes, and this is generally bad design. On the one hand AccountBalanceDetail is the spec for the import format; on the other, you are also trying to use it to describe the entity. Instead you should create separate classes and map between the two with a LINQ function or a library like AutoMapper, along these lines:
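A minimal sketch of that separation (the entity class name, its properties, and the mapping below are illustrative assumptions, not from the original code):

// EF entity: describes the table, with a proper key property.
public class AccountBalanceDetailEntity
{
    public int Id { get; set; }
    public string RecordNuber { get; set; }
    public string Branch { get; set; }
}

// Plain LINQ mapping from the parsed FileHelpers records to entities.
var entities = res.OfType<AccountBalanceDetail>()
                  .Select(d => new AccountBalanceDetailEntity
                  {
                      RecordNuber = d.RecordNuber,
                      Branch = d.Branch
                  })
                  .ToList();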
I have a List getter method that I want to index (tokenized) into a number of fields.
I have a FieldBridge implementation that iterates over the list and indexes each string into a field with the index appended to the field name to give a different name for each.
I have two different Analyzer implementations (CaseSensitiveNGramAnalyzer and CaseInsensitiveNGramAnalyzer) that I want to use with this FieldBridge (to make a case-sensitive and a case-insensitive index of the field).
This is the FieldBridge I want to apply the Analyzers to:
public class StringListBridge implements FieldBridge
{
    @Override
    public void set(String name, Object value, Document luceneDocument, LuceneOptions luceneOptions)
    {
        List<String> strings = (List<String>) value;
        for (int i = 0; i < strings.size(); i++)
        {
            // append the index to the field name to give each string its own field
            addStringField(name + i, strings.get(i), luceneDocument, luceneOptions);
        }
    }

    private void addStringField(String fieldName, String fieldValue, Document luceneDocument, LuceneOptions luceneOptions)
    {
        Field field = new Field(fieldName, fieldValue, luceneOptions.getStore(), luceneOptions.getIndex(), luceneOptions.getTermVector());
        field.setBoost(luceneOptions.getBoost());
        luceneDocument.add(field);
    }
}
Is it possible to apply an Analyzer to a field that uses a FieldBridge?
If so, can this be done with annotations, or does it have to be done programmatically?
If the latter, can I inject the Analyzer as a parameter?
I am thinking along the lines of the following, but am not at all familiar with field token streams etc.:
private void addStringField(String fieldName, String fieldValue, Document luceneDocument, LuceneOptions luceneOptions)
{
    Field field = new Field(fieldName, fieldValue, luceneOptions.getStore(), luceneOptions.getIndex(), luceneOptions.getTermVector());
    field.setBoost(luceneOptions.getBoost());
    try
    {
        field.setTokenStream(new CaseSensitiveNGramAnalyzer().reusableTokenStream(fieldName, new StringReader(fieldValue)));
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    luceneDocument.add(field);
}
Is this a sane approach?
EDIT: I have tried specifying the Analyzer and FieldBridge within a @Field annotation (without including the above analyzer code) as follows, but it appears to be using the default analyzer rather than those specified with analyzer =.
@Fields({
    @Field(name = "content-nocase",
        index = Index.TOKENIZED,
        analyzer = @Analyzer(impl = CaseInsensitiveNgramAnalyzer.class),
        bridge = @FieldBridge(impl = StringListBridge.class)),
    @Field(name = "content-case",
        index = Index.TOKENIZED,
        analyzer = @Analyzer(impl = CaseSensitiveNgramAnalyzer.class),
        bridge = @FieldBridge(impl = StringListBridge.class)),
})
public List<String> getContents()
The solution at the moment is via a custom scoped analyzer, or using @AnalyzerDiscriminator together with @AnalyzerDef. This is also discussed on the Hibernate Search forum - https://forum.hibernate.org/viewtopic.php?f=9&t=1016667
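For reference, a rough sketch of the @AnalyzerDef route (the definition name and the tokenizer/filter factories here are illustrative assumptions; swap in whatever your n-gram analyzers actually do):

// define a named analyzer once, then reference it by name
@AnalyzerDef(name = "ngram_nocase",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = NGramFilterFactory.class, params = {
            @Parameter(name = "minGramSize", value = "2"),
            @Parameter(name = "maxGramSize", value = "8")
        })
    })
@Field(name = "content-nocase",
    analyzer = @Analyzer(definition = "ngram_nocase"),
    bridge = @FieldBridge(impl = StringListBridge.class))
public List<String> getContents()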
I managed to get this working. Hibernate Search appears not to use the specified Analyzer when both analyzer = and bridge = are specified, at least if the specified bridge creates multiple fields.
Manually passing the TokenStream from the desired analyzer to the generated Fields in the bridge got me the expected result:
private void addStringField(String fieldName, String fieldValue, Document luceneDocument, LuceneOptions luceneOptions)
{
    Field field = new Field(fieldName, fieldValue, luceneOptions.getStore(), luceneOptions.getIndex(), luceneOptions.getTermVector());
    field.setBoost(luceneOptions.getBoost());

    // manually apply the token stream from the analyzer, as Hibernate Search does not
    // apply the specified analyzer properly
    try
    {
        field.setTokenStream(analyzer.reusableTokenStream(fieldName, new StringReader(fieldValue)));
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    luceneDocument.add(field);
}
ParameterizedBridge is implemented to specify which analyzer to use (the analyzer is instantiated and stored in a field before this method is called), roughly as sketched below.
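A sketch of that ParameterizedBridge wiring (the "analyzer" parameter name is my own convention, and the exact Map signature of setParameterValues varies between Hibernate Search versions):

public class StringListBridge implements FieldBridge, ParameterizedBridge
{
    private Analyzer analyzer;

    @Override
    public void setParameterValues(Map<String, String> parameters)
    {
        // parameter supplied via
        // @FieldBridge(impl = StringListBridge.class,
        //              params = @Parameter(name = "analyzer", value = "caseSensitive"))
        if ("caseSensitive".equals(parameters.get("analyzer")))
        {
            analyzer = new CaseSensitiveNGramAnalyzer();
        }
        else
        {
            analyzer = new CaseInsensitiveNGramAnalyzer();
        }
    }

    // set(...) and addStringField(...) as shown above
}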
In my domain, there's no important distinction between NULL and an empty string. How do I get EF to ignore the difference between the two and always persist an empty string as NULL?
An empty string is not the default value for a string property, so it means your code is setting empty strings somewhere. In that case it is your responsibility to handle it.
If you are using code first with POCOs, you can use a custom setter:
private string _myProperty;

public string MyProperty
{
    get { return _myProperty; }
    set
    {
        if (value == String.Empty)
        {
            _myProperty = null;
        }
        else
        {
            _myProperty = value;
        }
    }
}
Here is a function I placed in my DbContext subclass that replaces empty or whitespace strings with null.
I still haven't optimized it, so any performance hints will be very appreciated.
private const string StringType = "String";
private const EntityState SavingState = EntityState.Added | EntityState.Modified;

public override int SaveChanges()
{
    var objectContext = ((IObjectContextAdapter)this).ObjectContext;
    var savingEntries =
        objectContext.ObjectStateManager.GetObjectStateEntries(SavingState);

    foreach (var entry in savingEntries)
    {
        var curValues = entry.CurrentValues;
        var fieldMetadata = curValues.DataRecordInfo.FieldMetadata;

        var stringFields = fieldMetadata.Where(f =>
            f.FieldType.TypeUsage.EdmType.Name == StringType);

        foreach (var stringField in stringFields)
        {
            var ordinal = stringField.Ordinal;
            var curValue = curValues[ordinal] as string;

            if (curValue != null && curValue.All(char.IsWhiteSpace))
                curValues.SetValue(ordinal, null);
        }
    }

    return base.SaveChanges();
}
Optimization considerations:
- Identify string-typed properties some way other than by string comparison. (I tried to find an enumeration of the built-in types but didn't find one.)
- Cache the string fields per entity type; see the sketch below. (Maybe this is unnecessary; I'd have to decompile and see what the original implementation does.)
- Order the results by entity type and remember the last iterated type; if the next entity is of the same type, reuse the previous metadata. (Again, if the metadata is cached anyway, performance is cheaper the way it is.)
- Limit the string length for the whitespace check, i.e. if a string's length > x, skip checking whether it is a whitespace string.
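For the caching point above, a minimal sketch of what I have in mind (assuming the field metadata is stable per entity type; the helper is hypothetical):

private static readonly ConcurrentDictionary<Type, int[]> _stringOrdinals =
    new ConcurrentDictionary<Type, int[]>();

// Computes the ordinals of string-typed fields once per entity type and
// reuses them on subsequent SaveChanges calls.
private static int[] GetStringOrdinals(ObjectStateEntry entry)
{
    return _stringOrdinals.GetOrAdd(entry.Entity.GetType(), _ =>
        entry.CurrentValues.DataRecordInfo.FieldMetadata
            .Where(f => f.FieldType.TypeUsage.EdmType.Name == StringType)
            .Select(f => f.Ordinal)
            .ToArray());
}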
I'm using Silverlight and the TextBoxes in the UI set all the string properties to empty strings.
I tried setting:
<TextBox
    Text="{Binding MyStringProperty,
           Mode=TwoWay,
           ValidatesOnDataErrors=True,
           TargetNullValue=''}"/>
But it didn't help much.
That's not Entity Framework's job.
You should do it in your repository, or in the database with triggers.
Or do it at the start, i.e. when the data comes in (UI, external source, etc.).