I want to use a custom ContractConverter to add custom fields to my contracts.
But the Spring Cloud Contract plugin does not pick up the CustomContractConverter class I have created to process the contracts.
I have created a demo project in github to demonstrate this:
https://github.com/javiersvg/custom-contracts
Code structure:
- src
  - test
    - resources
      - contracts
        - health.yml -> A contract with a custom field.
      - META-INF
        - spring.factories -> Custom contract converter configuration
    - groovy/com/.../
      - CustomContractConverter.groovy -> Custom contract converter
How to run:
There is a docker-compose.yml file which copies the content of these folders into the springcloud/spring-cloud-contract:2.2.0.BUILD-SNAPSHOT Docker image and runs the generateContractTests task to create the tests based on the contracts.
Expected result:
An exception is thrown when the CustomContractConverter is called.
Actual result:
The process finishes with an exception because the default YamlContractConverter class is not able to interpret the custom field in the contract.
What I have discovered so far:
While debugging the Spring Cloud Contract Gradle plugin inside the Docker container (by adding the -Dorg.gradle.debug=true parameter to the command field in docker-compose.yml), I see that the SpringFactoriesLoader class (line 132), which should load the spring.factories file, only loads files with that name from the jars defined as dependencies, not the one I added in the source code.
This is done through the VisitableUrlClassLoader, which holds references to the dependency jars but not to the source code.
This is probably because the plugin does not load the source code until after it creates the contracts, but that is only a theory.
Any experience with custom contract converters would be really appreciated.
You have to build your own image with that file on the classpath. Your Groovy file needs to be compiled; we don't read scripts at runtime to retrieve additional converters.
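For reference, the registration in spring.factories is just an entry keyed by the fully qualified name of the ContractConverter interface; with an assumed package (yours is elided above), it would look something like:

# META-INF/spring.factories
org.springframework.cloud.contract.spec.ContractConverter=\
com.example.CustomContractConverter

SpringFactoriesLoader only scans the classpath for META-INF/spring.factories, which is why both this file and the compiled converter class must end up in an artifact on the plugin's classpath, not merely in the project sources.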
Related
I am trying to run my first Scio pipeline on Dataflow.
The code in question can be found here. However I do not think that is too important.
My first experiment was to read some local CSV files and write another local CSV file, using the DirectRunner. That worked as expected.
Now, I am trying to read the files from GCS, write the output to BigQuery and run the pipeline using the DataflowRunner. I already made all the necessary changes (or that is what I believe). But I am unable to make it run.
I already ran gcloud auth application-default login, and when I do
sbt run --runner=DataflowRunner --project=project-id --input-path=gs://path/to/data --output-table=dataset.table
I can see the job is submitted in Dataflow. However, after one hour the job fails with the following error message.
Workflow failed. Causes: The Dataflow job appears to be stuck because no worker activity has been seen in the last 1h.
(Note, the job did nothing in all that time, and since this is an experiment the data is simply too small to take more than a couple of minutes.)
Checking Stackdriver, I can find the following error:
java.lang.ClassNotFoundException: scala.collection.Seq
Related to some Jackson thing:
java.util.ServiceConfigurationError: com.fasterxml.jackson.databind.Module: Provider com.fasterxml.jackson.module.scala.DefaultScalaModule could not be instantiated
And that is what is killing each executor right at the start. I really do not understand why it cannot find the Scala standard library.
I also tried to first create a template and run it later with:
sbt run --runner=DataflowRunner --project=project-id --input-path=gs://path/to/data --output-table=dataset.table --stagingLocation=gs://path/to/staging --templateLocation=gs://path/to/templates/template-1
But, after running the template, I get the same error.
Also, I noticed that in the staging folder there are a lot of jars, but the scala-library.jar is not in there.
Am I missing something obvious?
It's a known issue with sbt 1.3.0 which introduced some breaking change w.r.t. class loaders. Try 1.2.8?
Also the Jackson issue is probably related to Java 11 or above. Stay with Java 8 for now.
Fix by setting the sbt classLoaderLayeringStrategy:
run / classLoaderLayeringStrategy := ClassLoaderLayeringStrategy.Flat
sbt uses a new classloader for the application that is run with run. This causes other classes already loaded by the JVM (Predef for instance) to be reused, reducing startup time. See in-process classloaders for details.
This doesn't play well with the Beam DataflowRunner because it explicitly does not stage classes from parent classloaders, see PipelineResources.java#L51:
Attempts to detect all the resources the class loader has access to. This does not recurse to class loader parents stopping it from pulling in resources from the system class loader.
So the fix is to force all classes used by your application to be loaded in the same classloader so that DataflowRunner stages everything.
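For reference, a minimal build.sbt carrying the fix could look like this (the versions are assumptions; the last line is the relevant one):

// build.sbt -- sbt 1.3.x assumed
scalaVersion := "2.12.10" // assumed

// Run the app in one flat classloader so Beam's resource detection
// sees scala-library and all other dependencies and stages them.
run / classLoaderLayeringStrategy := ClassLoaderLayeringStrategy.Flat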
Hope that helps
Using Scala Play Framework 2.5,
I build the app into a jar using the sbt plugin PlayScala,
and then build and push a Docker image out of it using the sbt plugin DockerPlugin.
In the source code repository resides conf/development.conf (in the same place as application.conf).
The last line in application.conf says include development, which means that when development.conf exists, its entries override some of the entries in application.conf, providing all the default values necessary to make the application runnable locally right out of the box after the source is cloned from source control, with zero extra configuration. This technique lets every new developer slip right into a working application without wasting time on configuration.
The only missing piece to make that architectural design complete is finding a way to exclude development.conf from the final runtime of the app - otherwise these overrides leak into the production runtime and the application obviously fails to run.
That can be achieved in various ways.
One way could be to somehow inject logic into the build task (provided as part of the sbt plugin PlayScala, I assume) to exclude the file from the jar artifact.
Another way could be injecting logic into the Docker image creation process. This logic could manually delete development.conf from the existing jar prior to executing it (assuming that's possible).
If you have ever implemented one of the ideas offered,
or maybe some different architectural approach that gives the same "works out of the box" feature, please be kind enough to share :)
I usually have the inverse logic:
I use the application.conf file (that Play uses by default) with all the things needed to run locally. I then have a production.conf file that starts by including the application.conf, and then overrides the necessary stuff.
For deploying to production (or staging), I specify the production.conf/staging.conf file to be used.
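A minimal sketch of that inverse layout (file and key names are illustrative):

# conf/application.conf -- everything needed to run locally
db.url = "jdbc:h2:mem:dev"

# conf/production.conf -- starts from the defaults, then overrides
include "application.conf"
db.url = ${DATABASE_URL}

The deployment then selects the override file, e.g. ./bin/myapp -Dconfig.resource=production.conf.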
This is how I solved it eventually.
conf/application.conf is a production-ready configuration; it contains placeholders for environment variables whose values will be injected at runtime by k8s, given the service's deployment.yaml file.
Right next to it sits conf/development.conf - its first line is include application.conf, and the rest of it are overrides which make the application run out of the box right after git clone with a simple sbt run.
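For illustration, that pair of files could look like this (the key names are made up):

# conf/application.conf -- production ready, values injected by k8s
db.url = ${DB_URL}

# conf/development.conf -- local defaults, works right after git clone
include "application.conf"
db.url = "jdbc:h2:mem:dev"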
What makes the above work, is the addition of the following to build.sbt :
PlayKeys.devSettings := Seq(
  "config.resource" -> "development.conf"
)
Works like a charm :)
This can be done via the mappings config key of sbt-native-packager:
mappings in Universal ~= (_.filterNot(_._1.name == "development.conf"))
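On newer sbt versions that prefer the slash syntax, the equivalent is:

Universal / mappings ~= (_.filterNot(_._1.name == "development.conf"))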
I am using Drools version 5.6.0.Final and facing the below issue.
I am using declarative model for creating dynamic pojos.
--> All new pojos go in the declare.drl file
--> All rules using the above pojos (declared in declare.drl) go in the rules.drl file
When the knowledge base is created the first time, compilation goes well and all rules are triggered correctly as expected. But when I update a rule in the rules.drl file, the knowledge agent fails to compile rules.drl because it uses pojos declared in another file, i.e. declare.drl.
So the agent gets only the rules.drl file for compilation, since that is the only file updated, and it does not get declare.drl. So the incremental build fails.
I am getting the below error (Server is a pojo declared in the declare.drl file):
Error Unable to resolve ObjectType 'Server'
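For illustration, a minimal version of the split looks like this (package and field names are made up):

// declare.drl -- only the declared type
package com.example.model

declare Server
    name : String
    load : int
end

// rules.drl -- references the type declared in the other file
package com.example.rules

import com.example.model.Server

rule "High load"
when
    $s : Server( load > 90 )
then
    System.out.println("Server " + $s.getName() + " is overloaded");
end

When the agent recompiles rules.drl on its own, the Server declaration from declare.drl is missing, hence the 'Unable to resolve ObjectType' error.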
Please let me know how to make this work so that the agent is able to compile it properly.
Thank you in advance,
Hariprasad Taduru
I would like to follow up on this question: gwt-serialization-policy-hosted-mode-out-of-sync. In short, when I do an RPC from the hosted browser, the call fails on the server with the following exception.
INFO: GwtRpcEventSrvc: ERROR: The serialization policy file '/84EC7BA65AF8175BAA99B47877FDE163.gwt.rpc' was not found; did you forget to include it in this deployment?
SEVERE: GwtRpcEventSrvc: WARNING: Failed to get the SerializationPolicy '84EC7BA65AF8175BAA99B47877FDE163' for module 'http://host:19980/MYAPP/'; a legacy, 1.3.3 compatible, serialization policy will be used. You may experience SerializationExceptions as a result.
SEVERE: Exception while dispatching incoming RPC call
In contrast, when I do the same RPC from the browser, the request is performed successfully on the server.
In addition, I observed strange behavior of the GWT compiler that could result in a problem with hosted browser mode.
I assume that when I do two subsequent compilations of exactly the same code, the results of the individual compilations are supposed to be the same. I mean at least the xxxxx.html and yyyyy.gwt.rpc files have to be the same (where xxxxx and yyyyy are long hashes such as 84EC7BA65AF8175BAA99B47877FDE163).
Currently I have two versions of my project.
An old project compiled by GWT 1.7 that does not suffer from the hosted-browser problem described in gwt-serialization-policy-hosted-mode-out-of-sync
A new project that is compiled by GWT 2.0.4. This new project is based on the old project, and it suffers from the hosted-browser problem.
Case 1: Old project with GWT 1.7
I took my old project that was compiled by GWT 1.7. I did two compilations and compared the compilation artifacts. The gwt.rpc files were the same, while the html files had different content and names. Since the gwt.rpc files were always the same, I did not have a problem with the hosted browser.
Case 2: New project with GWT 2.0.4
I compiled it twice and both the gwt.rpc and html files were different. Therefore the RPC call in the hosted browser failed on the server because of the missing gwt.rpc file.
Case 3: Old project with GWT 2.0.4
I compiled it twice and both the gwt.rpc and html files were different. Therefore the RPC call in the hosted browser failed on the server because of the missing gwt.rpc file.
I did some investigation and identified that when I comment out a data member in a class Data that is transmitted from server to client, the compiled files start being the same.
class Data implements IsSerializable {
    List<IsSerializable> data;
    ...
}
I wanted to do the same thing in the new project, but it seems that there are many classes to be modified. So the problem grows as the project grows.
I don't know what to use instead of
List<IsSerializable> data;
to transfer data.
You need to read some more on GWT serialization policies:
Serializable Types
Usually you don't need to deal with .rpc files unless you are doing advanced RPC calls directly to your server.
Your serializable object:
class Data implements IsSerializable {
    List<IsSerializable> data;
    ...
}
A user-defined class is serializable if all of the following apply:
It is assignable to IsSerializable or Serializable, either because it directly implements one of these interfaces or because it derives from a superclass that does
All non-final, non-transient instance fields are themselves serializable, and
As of GWT 1.5, it must have a default (zero argument) constructor (with any access modifier) or no constructor at all.
So you should probably have something like:
class Data implements IsSerializable {
    List<YOUR_TYPE> data;
    ...
}
Your list's type parameter must be a concrete serializable type; you don't use 'IsSerializable' itself... it should be something like:
    List<Integer> data;
Note that Java generics take reference types such as Integer or String, not primitives like int.
I have a solution which contains a Web project and a Class Library project. The Class Library project contains Enterprise Library 5.0 and an app.config. When I try to perform a Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write, I get the following exception:
Resolution of the dependency failed,
type = "Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter",
name = "(none)".
Exception occurred while: while resolving.
Exception is: InvalidOperationException - The type LogWriter cannot be constructed. You must configure the container to supply this value.
-----------------------------------------------
At the time of the exception, the container was:
Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter,(none)
If I move all the class files to the web project and put the Enterprise Library configuration in the Web.config, everything works fine. I guess the issue is that Enterprise Library is not detecting the app.config which contains all the configuration.
Kindly help me in this regard.
Thanks in advance.
.NET dlls don't have config files. AppDomains do. You cannot put any configuration in a dll's "app.config" file and expect it to get automatically picked up. This is the way .NET config files work; it's not that "entlib is not automatically detecting" it, it's doing what the .NET framework defines the behavior of config files to be.
The answer is to leave the code in the library, but put the configuration in the web app's web.config file. Then everything will just work.
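For example, the logging settings move wholesale into web.config; a rough sketch (the section declaration's exact version and public key token depend on your EntLib 5.0 install, so treat these as placeholders):

<configuration>
  <configSections>
    <section name="loggingConfiguration"
             type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </configSections>

  <!-- paste the <loggingConfiguration> element from the class
       library's app.config here, unchanged -->
  <loggingConfiguration name="Logging Application Block">
    ...
  </loggingConfiguration>
</configuration>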
There are more advanced things you can do like manually loading the config file, but they're fairly advanced and, particularly with logging, can cause admin headaches later.