I have been snooping around trying to find some release notes for Lucene.Net 3.0 and so far have been unsuccessful. Currently we use Lucene.Net 2.9, but it's a memory hog, so I am trying to find out whether Lucene.Net 3.0 has improved memory management.
As for my question, what are the major changes in 3.0? Has the memory management been improved?
Check out CHANGES.TXT in the source distribution; the 3.0.3 section is reproduced below.
=================== Release 3.0.3 2012-10-05 =====================
Bug
•[LUCENENET-54] - ArgumentOutOfRangeException caused by SF.Snowball.Ext.DanishStemmer
•[LUCENENET-420] - String.StartsWith has culture in it.
•[LUCENENET-423] - QueryParser differences between Java and .NET when parsing range queries involving dates
•[LUCENENET-445] - Lucene.Net.Index.TestIndexWriter.TestFutureCommit() Fails
•[LUCENENET-464] - The Lucene.Net.FastVectorHighligher.dll of the latest release 2.9.4 breaks any ASP.NET application
•[LUCENENET-472] - Operator == on Parameter does not check for null arguments
•[LUCENENET-473] - Fix linefeeds in more than 600 files
•[LUCENENET-474] - Missing License Headers in trunk after 3.0.3 merge
•[LUCENENET-475] - DanishStemmer doesn't work.
•[LUCENENET-476] - ScoreDocs in TopDocs is ambiguos when using Visual Basic .Net
•[LUCENENET-477] - NullReferenceException in ThreadLocal when Lucene.Net compiled for .Net 2.0
•[LUCENENET-478] - Parts of QueryParser are outdated or weren't previously ported correctly
•[LUCENENET-479] - QueryParser.SetEnablePositionIncrements(false) doesn't work
•[LUCENENET-483] - Spatial Search skipping records when one location is close to origin, another one is away and radius is wider
•[LUCENENET-484] - Some possibly major tests intermittently fail
•[LUCENENET-485] - IndexOutOfRangeException in FrenchStemmer
•[LUCENENET-490] - QueryParser is culture-sensitive
•[LUCENENET-493] - Make lucene.net culture insensitive (like the java version)
•[LUCENENET-494] - Port error in FieldCacheRangeFilter
•[LUCENENET-495] - Use of DateTime.Now causes huge amount of System.Globalization.DaylightTime object allocations
•[LUCENENET-500] - Lucene fails to run in medium trust ASP.NET Application
Improvement
•[LUCENENET-179] - SnowballFilter speed improvment
•[LUCENENET-407] - Signing the assembly
•[LUCENENET-408] - Mark assembly as CLS compliant; make AlreadyClosedException serializable
•[LUCENENET-466] - optimisation for the GermanStemmer.vb
•[LUCENENET-504] - FastVectorHighlighter - support for prefix query
•[LUCENENET-506] - FastVectorHighlighter should use Query.ExtractTerms as fallback
New Feature
•[LUCENENET-463] - Would like to be able to use a SimpleSpanFragmenter for extrcting whole sentances
•[LUCENENET-481] - Port Contrib.MemoryIndex
Task
•[LUCENENET-446] - Make Lucene.Net CLS Compliant
•[LUCENENET-471] - Remove Package.html and Overview.html artifacts
•[LUCENENET-480] - Investigate what needs to happen to make both .NET 3.5 and 4.0 builds possible
•[LUCENENET-487] - Remove Obsolete Members, Fields that are marked as obsolete and to be removed in 3.0
•[LUCENENET-503] - Update binary names
Sub-task
•[LUCENENET-468] - Implement the Dispose pattern properly in classes with Close
•[LUCENENET-470] - Change Getxxx() and Setxxx() methods to .NET Properties
(also posted here: https://github.com/ehcache/ehcache3/issues/3129 )
I'm trying to upgrade Ehcache from 2 to 3, and the (large) codebase contains:
net.sf.ehcache.hibernate.SingletonEhCacheProvider
inside XML-based Spring bean definitions:
<prop key="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</prop>
and I don't see any hints in the migration guide on what an acceptable way to achieve this would be:
https://www.ehcache.org/documentation/3.3/migration-guide.html
I'm using Spring 3.2.18 and hibernate as low as 3.3:
./WEB-INF/lib/hibernate-3.2.3.ga.jar
./WEB-INF/lib/hibernate-annotations-3.3.0.ga.jar
./WEB-INF/lib/hibernate-commons-annotations-4.0.1.Final.jar
./WEB-INF/lib/hibernate-validator-5.1.3.Final.jar
./WEB-INF/lib/spring-aop-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-beans-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-context-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-core-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-expression-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-jdbc-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-jms-3.0.3.RELEASE.jar
./WEB-INF/lib/spring-orm-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-oxm-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-security-config-3.1.2.RELEASE.jar
./WEB-INF/lib/spring-security-core-3.2.10.RELEASE.jar
./WEB-INF/lib/spring-security-saml2-core-1.0.0.RC2.jar
./WEB-INF/lib/spring-security-web-3.2.10.RELEASE.jar
./WEB-INF/lib/spring-test-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-tx-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-web-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.2.18.RELEASE.jar
./WEB-INF/lib/spring-ws-core-2.1.4.RELEASE-all.jar
What is the easiest way to use ehcache 3 with code that currently uses net.sf.ehcache.hibernate.SingletonEhCacheProvider?
Is there a compatibility matrix? I see lots of search results about Hibernate 4+ and Spring/Spring Boot at higher versions than my code uses.
(Note: this is legacy code not written by me :) We do have plans to modernize, but there's a more immediate security concern with Ehcache 2 that I need to address.)
We are using the SAP Crystal Reports runtime engine for .NET Framework (64-bit), version 13.0.30.3805.
Tried again with SP32 (13.0.32.4286) - no joy.
See also links:
https://origin-az.softwaredownloads.sap.com/public/file/0020000000661582022 for SP32 of the Visual Studio software (VS2022)
and https://origin-az.softwaredownloads.sap.com/public/file/0020000000661872022 for the SP32 Runtime - Both 64 bit
We have integrated Crystal Reports into our application for years. Our application is very flexible in what it can connect to: we connect to SQL Server databases either on-premises or on Azure, and we support the OLE DB providers SQLOLEDB, MSOLEDBSQL, SQLNCLI, and the latest and greatest, MSOLEDBSQL19.
Our application connects fine to an Azure database with a connection string like:
Provider=MSOLEDBSQL19;Server={servernamehere}.database.windows.net;Database={databasenamehere};Uid={usernamehere};Trust Server Certificate=True;MARS Connection=True;Application Name={name of the application here};
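For context, a rough sketch of how such a connection might be opened directly from .NET (the connection-string values and the Pwd key are placeholders, not our real settings):

using System.Data.OleDb;

using (var conn = new OleDbConnection(
    "Provider=MSOLEDBSQL19;Server=myserver.database.windows.net;Database=myDatabase;" +
    "Uid=myUser;Pwd=myPassword;Trust Server Certificate=True;MARS Connection=True;"))
{
    conn.Open();
    // Queries issued here go through the MSOLEDBSQL19 provider without problems.
}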
When we run a report through our application, we set its LogonProperties as a CrystalDecisions.Shared.NameValuePairs2 collection.
We set the following CrystalDecisions.Shared.NameValuePair2 values (a simplified sketch of how we apply them follows the list):
Provider - MSOLEDBSQL19
Data Source - server as per connection string
Initial Catalog - database name as per connection string
User ID
Integrated Security - "False"
Locale Identifier - "6153" (not sure what this is for?)
Connect Timeout - "15"
General Timeout - "0"
OLE DB Services - "-5"
Use Encryption for Data - "0" (have also tried "1")
Tag with column collation when possible - "0"
Asynchronous Processing - "0"
MARS Connection - "0" (have also tried "1")
DataTypeCompatibility - "0"
Trust Server Certificate - "0" (Have also tried "1" and setting the Flag2 Value to 1 in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\SNI19.0\GeneralFlags\Flag2)
Application Intent - "READWRITE" (Have also tried "READ")
MultiSubnetFailover - "0"
Use FMTONLY - "0"
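Roughly, this is how we apply those properties to the report (simplified; the ApplyLogonProperties helper name and the trimmed-down property list are just for illustration - the real code sets every value listed above):

// ReportDocument and Table are from CrystalDecisions.CrystalReports.Engine;
// NameValuePair2/NameValuePairs2 are from CrystalDecisions.Shared.
private static void ApplyLogonProperties(ReportDocument report, string server, string database, string user, string password)
{
    // Build the provider-specific logon properties (only a few of the values shown).
    var logonProps = new NameValuePairs2();
    logonProps.Add(new NameValuePair2("Provider", "MSOLEDBSQL19"));
    logonProps.Add(new NameValuePair2("Data Source", server));
    logonProps.Add(new NameValuePair2("Initial Catalog", database));
    logonProps.Add(new NameValuePair2("Integrated Security", "False"));
    logonProps.Add(new NameValuePair2("Trust Server Certificate", "1"));

    // Push the credentials and the logon properties into every table of the report.
    foreach (Table table in report.Database.Tables)
    {
        var logOnInfo = table.LogOnInfo;
        logOnInfo.ConnectionInfo.ServerName = server;
        logOnInfo.ConnectionInfo.DatabaseName = database;
        logOnInfo.ConnectionInfo.UserID = user;
        logOnInfo.ConnectionInfo.Password = password;
        logOnInfo.ConnectionInfo.LogonProperties = logonProps;
        table.ApplyLogOnInfo(logOnInfo);
    }
}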
As I have MSOLEDBSQL version 18 installed on my machine, I can change the Provider value to "MSOLEDBSQL" and then it works, but I can't assume that each user's machine has that driver installed. If the user has MSOLEDBSQL19 installed and does NOT have MSOLEDBSQL installed, it should still work.
Am I missing something obvious?
I am writing an application (using .NET Framework 4.5.2 + SQL Server 2014 installed locally). The application needs to support both SQL Server 2014 and previous versions.
When reading data using the inbuilt SQLCLR-types (SqlGeometry, SqlGeography, SqlHierarchyID), the standard ADO.NET methods (e.g. DataReader.GetValues()) use the 10.0.0.0 assembly, and throw an exception due to a mismatch with the loaded (v11 or v12) version.
The reasoning is documented (though it takes a while to spot) in the Breaking Changes in SQL Server 2012 (for the 11.0.0.0 assembly). For SQL Server 2012, there are three workarounds listed:
Use Type System Version=SQL Server 2012 in the SQLConnection.ConnectionString
OR: Use app.config / runtime / assemblyBinding / dependentAssembly to re-map v10.0.0.0 to v11.0.0.0
OR (not a very "neat" way to handle it): rewrite your own code to manually deserialize from a SqlBytes instance (roughly sketched just below)...
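For reference, a minimal sketch of that manual-deserialization workaround, assuming the spatial columns' ordinals are known (helper names are illustrative only); reading the raw SqlBytes and deserializing them yourself keeps everything on the assembly version your project references:

private static SqlGeography ReadGeographyColumn(SqlDataReader dr, int ordinal)
{
    // Take the raw serialized bytes instead of letting GetValues() materialize
    // an object from the provider's default (10.0.0.0) assembly.
    return dr.IsDBNull(ordinal)
        ? SqlGeography.Null
        : SqlGeography.Deserialize(dr.GetSqlBytes(ordinal));
}

private static SqlGeometry ReadGeometryColumn(SqlDataReader dr, int ordinal)
{
    return dr.IsDBNull(ordinal)
        ? SqlGeometry.Null
        : SqlGeometry.Deserialize(dr.GetSqlBytes(ordinal));
}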
When developing from a computer with SQL Server 2014 installed, the assembly version is v12.0.0.0, and similar issues arise:
System.InvalidCastException: Unable to cast object of type Microsoft.SqlServer.Types.SqlGeometry to type Microsoft.SqlServer.Types.SqlGeometry.
For SQL Server 2014 (other than the horrible manual-deserialization approach), there seems to be only one workaround, which is not officially documented in the breaking changes; it would appear that the .NET 4.5 SqlConnection hasn't yet caught up with this version of SQL Server:
Use app.config / runtime / assemblyBinding / dependentAssembly to re-map v10.0.0.0 to v12.0.0.0
Question: other than re-mapping v10.0.0.0 to v12.0.0.0 in app.config (which seems to work), is there any other (easier) approach that will use the referenced assembly version?
A quick code-example below shows the failure (without the assembly-remapping in place):
private static void DoStuff()
{
    SqlGeography geog_val = SqlGeography.STGeomFromText(new SqlChars("POLYGON((-122.358 47.653, -122.348 47.649, -122.348 47.658, -122.358 47.658, -122.358 47.653))"), 4326);
    SqlGeometry geom_val = SqlGeometry.Parse("LINESTRING(1 1,2 3,4 8, -6 3)");

    prm_geog.Value = DBNull.Value;
    prm_geom.Value = geom_val;
    ReadReturnedSpatialColumns(cmd);

    prm_geog.Value = geog_val;
    prm_geom.Value = DBNull.Value;
    ReadReturnedSpatialColumns(cmd);
}

private static void ReadReturnedSpatialColumns(SqlCommand cmd)
{
    using (var dr = cmd.ExecuteReader(CommandBehavior.SingleRow))
    {
        dr.Read();
        var items = new object[2];
        dr.GetValues(items);

        var geog_test = dr.IsDBNull(0) ? SqlGeography.Null : (SqlGeography)items[0];
        var geom_test = dr.IsDBNull(1) ? SqlGeometry.Null : (SqlGeometry)items[1];
    }
}
This issue still exists with Framework 4.6.1 and there appears to be no workaround apart from the 3 you've already discovered. So the short answer to your question is no.
However, I would question whether you really need version 12 of the spatial types, because (as far as I can tell) they don't add anything over the v11 types. If you'd prefer to use the v11 types so that you can use the Type System Version=SQL Server 2012 workaround, you can install the NuGet package that incorporates all three versions (10, 11, 12) - it's specifically designed to let you deploy to servers where MSSQL may not be installed.
As a bonus, referencing that package directly and using Type System Version=SQL Server 2012 will ensure that your app will always be using the 2012 spatial types, so upgrading to SQL 2016 won't break anything if it decides to return a different version of them (e.g. 13, or 14, or whatever 2016 will use) by default.
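For illustration, a minimal sketch of that approach (server and database names are placeholders), assuming your project references the v11 Microsoft.SqlServer.Types assembly from that package:

using (var conn = new SqlConnection(
    "Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=True;" +
    "Type System Version=SQL Server 2012;"))
{
    conn.Open();
    // With the 2012 type system requested, the reader returns the v11 (2012)
    // spatial types, which line up with the referenced assembly.
}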
Starting on 1/13, our Adobe CQ 6.0 SP1 error logs began filling up with:
GET /bin/wcm/contentfinder/product/view.json/etc/commerce/products HTTP/1.1] org.apache.jackrabbit.oak.plugins.index.property.strategy.ContentMirrorStoreStrategy Traversed 1041307000 nodes using index jcr:lastModified with filter Filter(query=select [jcr:path], [jcr:score], * from [nt:base] as a where isdescendantnode(a, '/etc/commerce/products') order by [jcr:lastModified] desc /* xpath: /jcr:root/etc/commerce/products//* order by @jcr:lastModified descending */, path=/etc/commerce/products//)
The error logs are huge and AEM 6.0 ran out of disk space:
error.log.2015-01-13: 30295763555 bytes
error.log.2015-01-14: 52886323200 bytes
We are able to reproduce the problem by issuing the following HTTP request against AEM Author:
GET /bin/wcm/contentfinder/product/view.json/etc/commerce/products
This issue started suddenly on 1/13/2015 at 9:47 a.m., when a co-worker was loading a site in AEM 6.0 and ContentFinder never loaded; she removed cf# from the URL and was then able to proceed with authoring the content itself.
We are interested in knowing whether others have had similar issues with ContentFinder in AEM 6.0.
AEM 6.0 has a bug in the QueryBuilder related to Oak 1.0.5; Oak needs to be upgraded to v1.0.9. The following URL has more information:
http://helpx.adobe.com/experience-manager/kb/aem6-available-hotfixes.html
SP1 needs to be installed first, and then the hotfixes need to be installed in the given order on top of SP1. The two sample index packages (damLucene.zip and productsIndex.zip) need to be installed as well; these add the following indices:
/oak:index/damLucene
/etc/commerce/products/ntbaseProductsLucene
I'm using PostgreSQL 8.4 and my application is trying to connect to the database.
I've registered the driver:
DriverManager.registerDriver(new org.postgresql.Driver());
and then trying the connection:
db = DriverManager.getConnection(database_url);
(BTW, my JDBC URL is something like: jdbc:postgresql://localhost:5432/myschema?user=myuser&password=mypassword)
I've tried various versions of the JDBC driver and I'm getting two types of errors:
with jdbc3:
Exception in thread "main" java.lang.AbstractMethodError: org.postgresql.jdbc3.Jdbc3Connection.getSchema()Ljava/lang/String;
with jdbc4:
java.sql.SQLFeatureNotSupportedException: Il metodo «org.postgresql.jdbc4.Jdbc4Connection.getSchema()» non è stato ancora implementato.
which means: the method org.postgresql.jdbc4.Jdbc4Connection.getSchema() has not been implemented yet.
I'm missing something, but I don't know what...
------ SOLVED ---------
The problem was not in the connection string or the driver version; the problem was in the code immediately following the getConnection() call:
db = DriverManager.getConnection(database_url);
LOGGER.info("Connected to : " + db.getCatalog() + " - " + db.getSchema());
It seems the PostgreSQL driver doesn't implement the getSchema() method, as the Java console had been trying to tell me all along.
The Connection.getSchema() method was added in Java 7 / JDBC 4.1. This means that it is not necessarily available in a JDBC 3 or JDBC 4 driver (although if an implementation exists, it will get called).
If you use a JDBC 3 (Java 4/5) driver or a JDBC 4 (Java 6) driver in Java 7 or higher it is entirely possible that you receive a java.lang.AbstractMethodError when calling getSchema if it does not exist in the implementation. Java provides a form of forward compatibility for classes implementing an interface.
If new methods are added to an interface, classes that do not have these methods and were - for example - compiled against an older version of the interface, can still be loaded and used provided the new methods are not called. Missing methods will be stubbed by code that simply throws an AbstractMethodError. On the other hand: if a method getSchema had been implemented and the signature was compatible that method would now be accessible through the interface, even though the method did not exist in the interface at compile time.
In March 2011, the driver was updated so it could be compiled on Java 7 (JDBC 4.1); this was done by stubbing the new JDBC 4.1 methods with an implementation that throws a java.sql.SQLFeatureNotSupportedException, including the implementation of Connection.getSchema. This code is still in the current PostgreSQL JDBC driver version 9.3-1102. Technically, a JDBC-compliant driver is not allowed to throw SQLFeatureNotSupportedException unless the API documentation or JDBC specification explicitly allows it (which it doesn't for getSchema).
However, the current code on GitHub has provided an implementation since April of this year. You might want to consider compiling your own version, or asking on the pgsql-jdbc mailing list whether recent snapshots are available (the snapshots link on http://jdbc.postgresql.org/ shows rather old versions).