How do I detect whether a mongodb serializer is already registered? - mongodb

I have created a custom serializer for MongoDB.
I can register it and it works as expected.
However, my application sometimes throws an error because it tries to register the serializer twice.
How do I detect whether a serializer has already been registered and thus stop my application from registering a second time?

If you are using
BsonSerializer.RegisterSerializer(typeof(Type), typeSerializer);
you might get the error "there is already a serializer registered for type", because you cannot register a serializer for the same type twice. You can, however, write your own serialization provider, which is consulted before the default serializers.
For instance, suppose you want to use local DateTime values instead of the default UTC.
All you need to do is write a class implementing IBsonSerializationProvider and register that provider with BsonSerializer as early as possible.
Here is the sample code:
public class LocalDateTimeSerializationProvider : IBsonSerializationProvider
{
    public IBsonSerializer GetSerializer(Type type)
    {
        // Handle DateTime ourselves; returning null lets the default providers serve every other type.
        return type == typeof(DateTime) ? DateTimeSerializer.LocalInstance : null;
    }
}
And to register it:
BsonSerializer.RegisterSerializationProvider(new LocalDateTimeSerializationProvider());
I hope this helps; you can also read the original documentation here.
This applies to version 2.4 of the MongoDB .NET driver.

TL;DR: If you are lazy, use BsonSerializer.LookupSerializer or BsonMemberMap.GetSerializer. To do it right, make sure the registration code is called once and only once.
The best approach to avoid this is to make sure the serializer is registered only once. It's a good idea to have some global startup code that registers anything that is global to the application once, and only once. That includes things like dependency injector configuration and tools like AutoMapper and the MongoDB driver. If you call this code only once, and from a single point in the code, you don't need to worry about thread safety, deadlocks or similar troubles.
The MongoDB driver configuration settings are thread-safe, but don't assume that this is true for all software packages that you might need to configure. Also, locking can be very expensive performance-wise if your code is multi-threaded, for instance in a web application. Last but not least, the lookup you're doing might not be trivial in the first place, because some methods need to walk an entire inheritance tree.
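As a sketch of that "once and only once" idea (the MongoSetup class and its Lazy-based guard are illustrative, not part of the driver API; this example registers the local DateTime serializer mentioned in the other answer):
using System;
using MongoDB.Bson.Serialization;
using MongoDB.Bson.Serialization.Serializers;

public static class MongoSetup
{
    // Lazy<T> runs the factory at most once, even with concurrent callers,
    // so RegisterSerializer can never be reached a second time.
    private static readonly Lazy<bool> Registered = new Lazy<bool>(() =>
    {
        BsonSerializer.RegisterSerializer(typeof(DateTime), DateTimeSerializer.LocalInstance);
        return true;
    });

    // Call this from every entry point that might touch MongoDB; only the first call registers.
    public static void EnsureRegistered()
    {
        var _ = Registered.Value;
    }
}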

Related

Profiles In dagger

I am new to Dagger and I am looking for a way to implement something like Spring profiles in Dagger 2.x. I want different beans for my devo and prod environments, but I am using the Dagger framework with Java.
@Provides
@Singleton
public void providesDaggerCoffeeShopClient(Stage stage) {
    DaggerCoffeeShop.builder()
            .dripCoffeeModule(new DripCoffeeModule())
            .qualifier(stage)
            .build();
}
Here, I want to skip this bean creation if stage is "Devo". Any help will be appreciated.
Well, I ran into this question two days ago and have since researched the matter. I was looking for a solution that would allow me to run the application with different profiles passed as a system property, like:
java -Denv=local-dev-env -jar java-app.jar
The only appropriate solution I was able to find is to follow the official documentation's testing guide:
https://dagger.dev/dev-guide/testing
and divide my single module into separate modules; in particular, I had to split out and substitute the database dependency so that when I run my app locally it avoids connecting to, or executing any command against, the real database.
When I run my app, I check the system property like this:
public boolean isLocalDevEnv() {
    return Environments.LOCAL_DEV.envName.equals(System.getProperty("env", Environments.PRODUCTION.envName));
}
If the system property DOES NOT contain the value I am looking for, then I create the PRODUCTION instance of my component (which is configured to use the production modules):
DaggerMyAppComponent.create()
Which approximately looks like:
@Component(modules = {MyAppModule.class, DaoModule.class})
@Singleton
public interface MyAppComponent {...}
Otherwise, I create the local-dev-env version of the component, which uses a version of the module that produces a mock Dao instead of one that would create a real connection to the real database:
DaggerMyAppLocalDevEnvComponent.create()
Which approximately looks like:
@Component(modules = {MyAppModule.class, DaoMockModule.class})
@Singleton
public interface MyAppLocalDevEnvComponent {...}
Hope that was clear: just think of Spring profiles for Dagger 2 from the perspective of system properties and programmatic decision making. This approach definitely requires a lot more boilerplate code than Spring's profiles implementation, but it is the only viable approach I was able to come up with.
Hope it helps.

What is the purpose of MigrateDatabaseToLatestVersion useSuppliedContext = false?

Something I ran into recently.
I have a project which dynamically generates connection strings, and I'm trying to use MigrateDatabaseToLatestVersion on the context that wraps them. Every time I did this, my dynamic DB would not be created; instead, the DB from my default-constructor connection string (used for testing) was migrated over and over.
After digging through the EF migrations source code, I found that MigrateDatabaseToLatestVersion has a constructor:
// Summary:
// Initializes a new instance of the MigrateDatabaseToLatestVersion class specifying
// whether to use the connection information from the context that triggered initialization
// to perform the migration.
//
// Parameters:
// useSuppliedContext:
// If set to true the initializer is run using the connection information from the
// context that triggered initialization. Otherwise, the connection information
// will be taken from a context constructed using the default constructor or registered
// factory if applicable.
public MigrateDatabaseToLatestVersion(bool useSuppliedContext);
Not to be flippant, but why would you ever want to run the migration using connection information from a context other than the one being migrated? Why is that the default? Does anyone have any insight into the thinking here?
I want to know the answer to this question myself. I do not know why the context was designed that way. However, I can venture a guess as to why the current default is useSuppliedContext=false.
I decompiled the first version of EntityFramework to include migration support, EntityFramework-4.3.0, because I suspect that the default behavior is for backwards compatibility purposes. I looked at the decompiled implementation of IDatabaseInitializer<TContext>.InitializeDatabase(TContext context) in MigrateDatabaseToLatestVersion. Guess what? In EntityFramework-4.3.0, the context parameter of that method is completely ignored. So it can’t possibly respond to explicitly-provided connection parameters/settings because those are only accessible through that context variable.
It looks like support for respecting context was added in EntityFramework-6.1.1. Prior to that, your only option was to pass a connection string to MigrateDatabaseToLatestVersion’s constructor. I think this would have prevented you from using the same DbContext type for different backends in the same process. I bet that the new feature of respecting the context (and behaving correctly, IMO) would not have been accepted into EntityFramework if it was enabled by default because that would change behavior which stable projects may be relying on and otherwise prevent projects from adopting it.
The exact reasoning is actually given as a comment in commit 777a7a77a740c75d1828eb53332ab3d31ebbcfa3 by Rowan Miller:
Also swapping the new useSuppliedContext parameter on MigrateDatabaseToLatestVersion.cs to be false by default since we are going to be shipping this change in a patch release.
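For reference, opting into the context-respecting behaviour is just a constructor argument on the initializer; a minimal sketch, where DynamicContext and Configuration are placeholder names for your DbContext and its DbMigrationsConfiguration (Database.SetInitializer and the initializer class live in System.Data.Entity):
// With useSuppliedContext: true, the migration runs against the connection of the
// context instance that triggered initialization, not one built from the default constructor.
Database.SetInitializer(
    new MigrateDatabaseToLatestVersion<DynamicContext, Configuration>(useSuppliedContext: true));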

Utilizing RijndaelManaged, Enterprise Library and Autofac together

I'm newly experimenting with the cryptography application block while using Autofac as the container.
As a result, I'm using the nuget package EntLibContrib 5.0 - Autofac Configurator.
With the DPAPI Symmetric Crypto Provider, I was able to encrypt/decrypt data just fine.
However, with RijndaelManaged, I receive an ActivationException:
Microsoft.Practices.ServiceLocation.ActivationException: Activation error occured while trying to get instance of type ISymmetricCryptoProvider, key "RijndaelManaged" ---> Autofac.Core.Registration.ComponentNotRegisteredException: The requested service 'RijndaelManaged (Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.ISymmetricCryptoProvider)' has not been registered. To avoid this exception, either register a component to provide the service, check for service registration using IsRegistered(), or use the ResolveOptional() method to resolve an optional dependency.
Per instructions here: http://msdn.microsoft.com/en-us/library/ff664686(v=pandp.50).aspx
I am trying to inject CryptographyManager into MyService.
My bootstrapping code looks like this:
var builder = new ContainerBuilder();
builder.RegisterEnterpriseLibrary();
builder.RegisterType<MyService>().As<IMyService>();
_container = builder.Build();
var autofacLocator = new AutofacServiceLocator(_container);
EnterpriseLibraryContainer.Current = autofacLocator;
App.config has this info defined for symmetricCryptoProviders:
name: RijndaelManaged
type: Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.HashAlgorithmProvider, Microsoft.Practices.EnterpriseLibrary.Security.Cryptography, Version=5.0.505.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
algorithmType: System.Security.Cryptography.RijndaelManaged
protectedKeyFilename: [path_to_my_key]
protectedKeyProtectionScope: LocalMachine
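For reference, the service that the CryptographyManager gets injected into is shaped roughly like this (a sketch; the member and method names are illustrative, and "RijndaelManaged" is the provider name from the config above):
using Microsoft.Practices.EnterpriseLibrary.Security.Cryptography;

public class MyService : IMyService
{
    private readonly CryptographyManager _cryptography;

    public MyService(CryptographyManager cryptography)
    {
        _cryptography = cryptography;
    }

    public string Encrypt(string plaintext)
    {
        // The instance name must match the symmetric provider configured in App.config.
        return _cryptography.EncryptSymmetric("RijndaelManaged", plaintext);
    }
}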
Anyone have experience in this combination of technologies?
After some testing, I believe I may go with a Unity container instead, since I have no preference in IoC containers other than that whatever I use should integrate nicely with ASP.NET MVC3 and HTTP-hosted WCF services.
My bootstrapping code then becomes simpler:
var container = new UnityContainer()
.AddNewExtension<EnterpriseLibraryCoreExtension>();
container.RegisterType<IMyService, MyService>();
I actually wrote the Autofac EntLib configurator (with some help from some of the P&P folks). It's been tested with the exception handling block and logging block, but I haven't tried it with the cryptography stuff.
EntLib has an interesting thing where it sometimes requires registered services to be named, and I'm guessing from the exception where it says...
type ISymmetricCryptoProvider, key "RijndaelManaged"
...I'm thinking EntLib wants you to register a named service, like:
builder.Register(c =>
{
    // create the HashAlgorithmProvider using
    // RijndaelManaged algorithm
})
.Named<ISymmetricCryptoProvider>("RijndaelManaged");
I'm sort of guessing at the exact registration since, again, I've not got experience with it or tested it, but the idea is that EntLib is trying to register a named service whereas the actual service isn't getting registered with the name.
The RegisterEnterpriseLibrary extension basically goes through and tries to use the same algorithm that Unity uses to do the named/unnamed registrations. I'm guessing you've encountered an edge case where something's not getting handled right. EntLib is pretty well tied to Unity, even if they did try to abstract it away.
If you're not tied to Autofac, Unity is going to be your lowest-friction path forward. I like the ease of use and more lightweight nature of Autofac, and my apps are tied to it, so I needed everything to work that way; if you don't have such an affinity, it might be easier to just use Unity.
Sorry that's not a super answer. EntLib wire-up in IoC is a really complex beast.

Can I use RequestFactory without getId() and getVersion() methods?

We are trying to use RequestFactory with an existing Java entity model. Our Java entities all implement a DomainObject interface and expose a getObjectId() method (this name was chosen because getId() can be ambiguous and conflict with the domain object's actual ID from the domain being modeled).
The ServiceLayerDecorator interface allows for customization of ID and Version property lookup strategies.
public class MyServiceLayerDecorator extends ServiceLayerDecorator {
    @Override
    public Object getId(Object object) {
        DomainObject domainObject = (DomainObject) object;
        return domainObject.getObjectId();
    }
}
So far, so good. However, trying to deploy this solution yields runtime errors. In particular, RequestFactoryInterfaceValidator complains:
[ERROR] There is no getId() method in type com.mycompany.server.MyEntity
Then later on:
[ERROR] Type type com.mycompany.client.MyEntityProxy was previously marked as bad
[ERROR] The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
[ERROR] Unexpected error
com.google.web.bindery.requestfactory.server.UnexpectedException: The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
at com.google.web.bindery.requestfactory.server.ServiceLayerDecorator.die(ServiceLayerDecorator.java:212) ~[gwt-servlet.jar:na]
My question is - why does the ServiceLayerDecorator allow for customized ID and Version lookup strategies if RequestFactoryInterfaceValidator is hardcoding the convention of getId() and getVersion()?
I guess I could override ServiceLayerDecorator.resolveClass() to ignore "poisoned" proxy classes but at this point it seems like I'm fighting the framework too much...
A couple of options, some of which have already been mentioned:
Locator. I like to make a single Locator for the entire project, or at least for groups of related objects that have similar key types. The getId() call will be able to invoke your DomainObject.getObjectId() method and return that value. Note that the getDomainType() method is currently unused, and can return null or throw an exception.
ValueProxy. Instead of having your objects map to something RF can understand as an entity, map them to plain value objects - no id or version required. RF misses out on a lot of clever things it can do, especially with regard to avoiding sending redundant data to the server.
ServiceLayerDecorator. This worked pre 2.4, but with the annotation processing that goes on now, it works less well, since it tries to do some of the work for you. It seems ServiceLayerDecorator has lost a lot of its teeth in the last few months - in theory, you could use it to rebuild getters to talk directly to your persistence mechanism, but now that the annotation processing verifies your code, that is no longer an option.
Big issue in all of this is that RequestFactory is designed to solve a single problem, and solve it well: Allow developers to use POJOs mapped to some persistence mechanism, and refer to those objects from the client, following certain conventions to avoid writing extra code or configuration.
As a result, it solves its own problem pretty well, and ends up being a bad fit for many other problems/use-cases. You might be finding that it isn't worth it: if so, a few thoughts you might consider:
RPC. It isn't perfect for much, but it does an okay job for a lot.
AutoBeans (which RF is based on) is still a pretty fast, lightweight way to send data over the wire and get it into the app. You could build your own wrapper around it, like RF has done, and slim down the problem it is trying to solve to just your use-case.

Accessing Datasource from Outside A Web Container (through JNDI)

I'm trying to access a data source that is defined within a web container (JBoss) from a fat client outside the container.
I've decided to look up the data source through JNDI. Actually, my persistence framework (Ibatis) does this.
When performing queries I always end up getting this error:
java.lang.IllegalAccessException: Method=public abstract java.sql.Connection java.sql.Statement.getConnection() throws java.sql.SQLException does not return Serializable
Stacktrace:
org.jboss.resource.adapter.jdbc.remote.WrapperDataSourceService.doStatementMethod(WrapperDataSourceService.java:411),
org.jboss.resource.adapter.jdbc.remote.WrapperDataSourceService.invoke(WrapperDataSourceService.java:223),
sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source),
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25),
java.lang.reflect.Method.invoke(Method.java:585),
org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155),
org.jboss.mx.server.Invocation.dispatch(Invocation.java:94),
org.jboss.mx.server.Invocation.invoke(Invocation.java:86),
org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264),
org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659),
My Datasource:
<?xml version="1.0" encoding="UTF-8"?>
<datasources>
  <local-tx-datasource>
    <jndi-name>jdbc/xxxxxDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@xxxxxxxxx:1521:xxxxxxx</connection-url>
    <use-java-context>false</use-java-context>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <user-name>xxxxxxxx</user-name>
    <password>xxxxxx</password>
    <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>20</max-pool-size>
  </local-tx-datasource>
</datasources>
Does anyone have a clue where this could come from?
Maybe someone even knows a better way how to achieve this.
Any hints are much appreciated!
Cheers,
Michael
Not sure if this is the same issue?
JBoss DataSource config
DataSource wrappers are not usable outside of the server VM
@Michael Well, java.sql.Connection is an interface - it might technically be possible for the concrete implementation you're getting from JBoss to be Serializable - but I don't think you're really going to have any options you can use. If it was possible, it would probably be easy :)
I think @toolkit might have said the right words with "usable outside the VM" - the JDBC drivers will be talking to native driver code running in the underlying OS, I guess, so that might explain why you can't just pass a connection over the network elsewhere.
My advice (if you don't get any better advice!) would be to find a different approach - if you have access to locate the resource in the JBoss directory, maybe implement a proxy object that you can locate and obtain from the directory and that allows you to use the connection remotely from your fat client. That's a design pattern called Data Transfer Object, I think (see the Wikipedia entry).
@toolkit:
Well, not exactly. Since I can access the data source over JNDI, it is actually visible and thus usable.
Or am I getting something totally wrong?
@Brabster:
I think you're on the right track. Isn't there a way to make the connection serializable? Maybe it's just a configuration issue...
I've read up on Ibatis now - maybe you can make your implementations of Dao etc. Serializable, post them into your directory and so retrieve them and use them in your fat client? You'd get reuse benefits out of that too.
Here's an example of something that looks similar, for Wicket.
JBoss wraps all DataSources with its own.
That lets it play tricks with autocommit to get the specified J2EE behaviour out of a JDBC connection. They are mostly serializable, but you needn't trust them.
I'd look carefully at its wrappers. I've written a surrogate for JBoss's J2EE JDBC wrappers that works with OOCJNDI to get my DAO code unit-testable standalone.
You just wrap java.sql.Driver, point OOCJNDI at your class, and run in JUnit.
The Driver wrapper can just directly create a SQL Driver and delegate to it.
Return a java.sql.Connection wrapper of your own devising on connect.
A ConnectionWrapper can just wrap the Connection your Oracle driver gives you, and all it does specially is set autocommit to true.
Don't forget Eclipse can write delegates for you: add a member you need to delegate to, select it, right-click, and choose Source -> Add Delegate Methods.
This is great when you get paid by the line ;-)
Bada-bing, bada-boom: JUnit out-of-the-box J2EE testing.
Your problem is probably amenable to the same thing, with JUnit crossed out and FatClient written in crayon.
My FatClient uses RMI generated with XDoclet to talk to the J2EE server, so I don't have your problem.
I think the exception indicates that the SQLConnection object you're trying to retrieve doesn't implement the Serializable interface, so it can't be passed to you the way you asked for it.
From the limited work I've done with JNDI, if you're asking for an object via JNDI it must be serializable. As far as I know, there's no way around that - if I think of a better way I'll post it up...
OK, one obvious option is to provide a serializable object local to the datasource that uses it but doesn't have the datasource as part of its serializable object graph. The fat client could then look up that object and query it instead.
Or create a (web?) service through which access to the datasource is governed - again, your fat client would hit the service - this would probably be a better-encapsulated and more reusable approach, if those are concerns for you.