Build micronaut native image with hikari datasource - hikaricp

I'm facing a problem running a Micronaut application packaged as a native image.
I created a simple demo application with micronaut-data-hibernate-jpa and, based on the documentation, I need to add a DB connection pool. I chose Hikari and added the micronaut-jdbc-hikari dependency.
I use Maven as the build tool and added the native-image-maven-plugin to build the native image.
native-image.properties
Args = -H:IncludeResources=logback.xml|application.yml|bootstrap.yml \
-H:Name=demo \
-H:Class=com.example.Application \
-H:+TraceClassInitialization \
--initialize-at-run-time=org.apache.commons.logging.LogAdapter$Log4jLog,org.hibernate.secure.internal.StandardJaccServiceImpl,org.postgresql.sspi.SSPIClient,org.hibernate.dialect.OracleTypesHelper \
--initialize-at-build-time=org.postgresql.Driver,org.postgresql.util.SharedTimer,org.hibernate.engine.spi.EffectiveEntityGraph,org.hibernate.engine.spi.LoadQueryInfluencers
When I run the application on the JVM, everything works. But when I run the same application packaged as a native image, I get the following error:
Caused by: java.lang.IllegalArgumentException: Class com.zaxxer.hikari.util.ConcurrentBag$IConcurrentBagEntry[] is instantiated reflectively but was never registered. Register the class by using org.graalvm.nativeimage.hosted.RuntimeReflection
at com.oracle.svm.core.graal.snippets.SubstrateAllocationSnippets.arrayHubErrorStub(SubstrateAllocationSnippets.java:280)
at java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:305)
at java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
at java.lang.ThreadLocal.get(ThreadLocal.java:172)
at com.zaxxer.hikari.util.ConcurrentBag.borrow(ConcurrentBag.java:129)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:179)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:161)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100)
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:38)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:104)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:134)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getConnectionForTransactionManagement(LogicalConnectionManagedImpl.java:250)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.begin(LogicalConnectionManagedImpl.java:258)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.begin(JdbcResourceLocalTransactionCoordinatorImpl.java:246)
at org.hibernate.engine.transaction.internal.TransactionImpl.begin(TransactionImpl.java:83)
at org.hibernate.internal.AbstractSharedSessionContract.beginTransaction(AbstractSharedSessionContract.java:471)
at io.micronaut.transaction.hibernate5.HibernateTransactionManager.doBegin(HibernateTransactionManager.java:352)
... 99 common frames omitted
UPDATE/SOLUTION
Based on @Airy's answer I added a reflection configuration referenced from native-image.properties. In my case it looks like this:
[
  {
    "name" : "com.zaxxer.hikari.util.ConcurrentBag",
    "allDeclaredConstructors" : true,
    "allPublicConstructors" : true,
    "allDeclaredMethods" : true,
    "allPublicMethods" : true,
    "allDeclaredClasses" : true,
    "allPublicClasses" : true
  },
  {
    "name" : "com.zaxxer.hikari.pool.PoolEntry",
    "allDeclaredConstructors" : true,
    "allPublicConstructors" : true,
    "allDeclaredMethods" : true,
    "allPublicMethods" : true,
    "allDeclaredClasses" : true,
    "allPublicClasses" : true
  }
]
Another solution is to change the scope of the Hikari dependency to compile and add the missing classes to the @TypeHint annotation, like so:
@TypeHint(value = {
    PostgreSQL95Dialect.class,
    SessionFactoryImpl.class,
    org.postgresql.PGProperty.class,
    UUIDGenerator.class,
    com.zaxxer.hikari.util.ConcurrentBag.class, // In my case I have just added this line
}, accessType = {TypeHint.AccessType.ALL_PUBLIC})
You can find the whole example here.

You should declare reflection configuration in your native-image.properties with -H:ReflectionConfigurationFiles=/path/to/reflectconfig
Here is the documentation for doing so
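For reference, a minimal sketch of how that flag could be combined with the Args from the question (reflect-config.json is a placeholder name for the JSON file shown in the update; point the path at wherever you save it):
Args = -H:ReflectionConfigurationFiles=/path/to/reflect-config.json \
       -H:IncludeResources=logback.xml|application.yml|bootstrap.yml \
       -H:Name=demo \
       -H:Class=com.example.Application
The --initialize-at-build-time and --initialize-at-run-time flags from the question stay as they are.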

Related

zod-prisma keeps getting stuck at the generation step

I just installed zod-prisma to try it out, but so far nothing is working.
I attempted to run it on two separate projects: one has a small Prisma schema and the other a relatively large one.
Neither worked; both remained stuck.
I can't share the schemas, unfortunately. So, any ideas what the problem could be?
I also enabled strict in tsconfig, still the same.
prisma : 3.6.0
@prisma/client : 3.6.0
Current platform : debian-openssl-1.1.x
Query Engine (Node-API) : libquery-engine dc520b92b1ebb2d28dc3161f9f82e875bd35d727 (at node_modules/@prisma/engines/libquery_engine-debian-openssl-1.1.x.so.node)
Migration Engine : migration-engine-cli dc520b92b1ebb2d28dc3161f9f82e875bd35d727 (at node_modules/@prisma/engines/migration-engine-debian-openssl-1.1.x)
Introspection Engine : introspection-core dc520b92b1ebb2d28dc3161f9f82e875bd35d727 (at node_modules/@prisma/engines/introspection-engine-debian-openssl-1.1.x)
Format Binary : prisma-fmt dc520b92b1ebb2d28dc3161f9f82e875bd35d727 (at node_modules/@prisma/engines/prisma-fmt-debian-openssl-1.1.x)
Default Engines Hash : dc520b92b1ebb2d28dc3161f9f82e875bd35d727
Studio : 0.440.0
"typescript": "^4.5.2"
"zod-prisma": "^0.5.4"
{
  "compilerOptions": {
    "target": "es2018",
    "module": "commonjs",
    "lib": ["es2018", "esnext.asynciterable"],
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "declaration": true,
    "removeComments": true,
    "allowSyntheticDefaultImports": true,
    "esModuleInterop": true,
    "sourceMap": false,
    "outDir": "./dist",
    "baseUrl": "./",
    "incremental": true,
    "skipLibCheck": true,
    "strictNullChecks": false,
    "noImplicitAny": false,
    "strictBindCallApply": false,
    "forceConsistentCasingInFileNames": false,
    "noFallthroughCasesInSwitch": false,
    "strict": true
  }
}
I also cloned the example in the repo, but it had no zod-prisma configured at all. So I added this from the README:
generator zod {
  provider = "zod-prisma"
  output = "./zod"
  relationModel = true
  // relationModel = "default" // Do not export model without relations.
  // relationModel = false // Do not generate related model
  modelCase = "PascalCase"
  // modelCase = "camelCase" // Output models using camel case (ex. userModel, postModel)
  modelSuffix = "Model"
  // useDecimalJs = false // (default) represent the prisma Decimal type using as a JS number
  useDecimalJs = true
  imports = null
  // https://www.prisma.io/docs/concepts/components/prisma-client/working-with-fields/working-with-json-fields#filtering-by-null-values
  prismaJsonNullability = true
  // prismaJsonNullability = false // allows null assignment to optional JSON fields
}
but it also got stuck when I ran npx prisma generate.
I'm using Node v14.19.1.
I upgraded to the latest Prisma version:
prisma : 3.13.0
@prisma/client : 3.13.0
Current platform : debian-openssl-1.1.x
Query Engine (Node-API) : libquery-engine efdf9b1183dddfd4258cd181a72125755215ab7b (at node_modules/@prisma/engines/libquery_engine-debian-openssl-1.1.x.so.node)
Migration Engine : migration-engine-cli efdf9b1183dddfd4258cd181a72125755215ab7b (at node_modules/@prisma/engines/migration-engine-debian-openssl-1.1.x)
Introspection Engine : introspection-core efdf9b1183dddfd4258cd181a72125755215ab7b (at node_modules/@prisma/engines/introspection-engine-debian-openssl-1.1.x)
Format Binary : prisma-fmt efdf9b1183dddfd4258cd181a72125755215ab7b (at node_modules/@prisma/engines/prisma-fmt-debian-openssl-1.1.x)
Default Engines Hash : efdf9b1183dddfd4258cd181a72125755215ab7b
Studio : 0.459.0
Still the same issue!
On a side note, I would appreciate it if someone with a rep of 1500 or higher could add the zod-prisma tag.
There was a missing dependency: simply install zod.
The generator should have displayed a warning, though.
I also submitted a PR to fix this issue for good!
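For example, installing it manually in the affected project (assuming npm as the package manager):
npm install zod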

Error while generating node info file with database.runMigration

I added database.runMigration: true to my build.gradle file, but I'm getting this error when running deployNodes. What's causing it?
[ERROR] 14:05:21+0200 [main] subcommands.ValidateConfigurationCli.logConfigurationErrors$node - Error(s) while parsing node configuration:
- for path: "database.runMigration": Unknown property 'runMigration'
Here's the deployNodes task from my build.gradle:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    directory "./build/nodes"
    ext.drivers = ['.jdbc_driver']
    ext.extraConfig = [
        'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
        'dataSourceProperties.dataSource.user' : "corda",
        'dataSourceProperties.dataSource.password' : "corda1234",
        'database.transactionIsolationLevel' : 'READ_COMMITTED',
        'database.runMigration' : "true"
    ]
    nodeDefaults {
        projectCordapp {
            deploy = false
        }
        cordapp project(':cordapp-contracts-states')
        cordapp project(':cordapp')
    }
    node {
        name "O=HUS,L=Helsinki,C=FI"
        p2pPort 10008
        rpcSettings {
            address "localhost:10009"
            adminAddress "localhost:10049"
        }
        webPort 10017
        rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
        extraConfig = ext.extraConfig + [
            'dataSourceProperties.dataSource.url' :
                "jdbc:postgresql://localhost:5432/hus_db?currentSchema=corda_schema"
        ]
        drivers = ext.drivers
    }
}
database.runMigration is a Corda Enterprise-only property.
To control database migration in Corda Open Source, use initialiseSchema.
initialiseSchema
Boolean which indicates whether to update the database schema at startup (or create the schema when node starts for the first time). If set to false on startup, the node will validate if it’s running against a compatible database schema.
Default: true
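As a sketch, the extraConfig from the deployNodes task above could be adjusted like this, swapping the Enterprise-only key for initialiseSchema (the other entries stay as in the question):
ext.extraConfig = [
    'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
    'dataSourceProperties.dataSource.user' : "corda",
    'dataSourceProperties.dataSource.password' : "corda1234",
    'database.transactionIsolationLevel' : 'READ_COMMITTED',
    'database.initialiseSchema' : "true"
]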
You may refer to the link below for other database properties you can set:
https://docs.corda.net/corda-configuration-file.html

How to access vendor extensions in api.mustache regarding imports when endpoints are regrouped by tags

Problem
The issue concerns swagger-codegen and using multiple files to define specs.
I have two REST APIs with two different spec files, specs1.yml and specs2.yml.
These specs have some models/schemas/definitions in common (I use Swagger 2.0).
I'd like to factor these shared definitions out into a core.yml file.
I can then reference them from specs1 and specs2.
The issue is that swagger-codegen generates these models as part of specs1 and specs2. What I'd like is to process the core.yml file, generate its classes in a core package, and then have the classes generated from specs1/specs2 reference the core package whenever a model is a shared one.
Technical Stack
Maven / Java / JAXRS-CXF REST API
Maven 3.6.0
Java 1.8.0_201
swagger-jaxrs 1.5.16
CXF 3.1.14
swagger-codegen-maven-plugin 2.4.7
Code Example
Swagger Specs YML
I have a tag named e.g. "Super Tag" in my Swagger spec definition.
Multiple endpoints are grouped under that tag. For the sake of a minimal PoC of my issue, let's go with one endpoint:
specs1.yml
swagger: "2.0"
tags:
  - name: "Super Tag"
    x-core-imports: [ErrorResponse] # Trying this at tag level
    x-imports: [ABody] # Trying this at tag level
paths:
  /someEndpointPath:
    post:
      x-core-imports: [ErrorResponse] # --> import bla.core.api.models.ErrorResponse
      x-imports: [ABody] # --> import bla.project.api.models.ABody
      tags:
        - "Super Tag"
      operationId: postToSomeEndpoint
      consumes:
        - application/json
      produces:
        - application/json
      parameters:
        - name: body
          in: body
          required: true
          schema:
            $ref: '#/definitions/ABody' # This model is defined in this file
      responses:
        204:
          description: "Successful response"
        400:
          description: "Bad request error"
          schema:
            $ref: '../CORE.yml#/definitions/_ErrorResponse'
            # I'm importing this model definition from another file
definitions:
  ABody:
    type: object
    field:
      type: string
Swagger Codegen Debugging
I tried to see where the vendor extensions get added relative to the imports: [{ "import": ... }] section (which is what the api.mustache template reads, see below):
> java -DdebugSupportingFiles -jar modules/swagger-codegen-cli/target/swagger-codegen-cli.jar generate -i specs.yml -l java > result.json
In the output I can see:
result.json
"apiInfo" : {
  "apis" : { [
    "parent" : [ ],
    "generatorClass" : "io.swagger.codegen.languages.JavaClientCodegen",
    "supportJava6" : false,
    "sortParamsByRequiredFlag" : true,
    "groupId" : "io.swagger",
    "invokerPackage" : "io.swagger.client",
    "classVarName" : "superTag",
    "developerEmail" : "apiteam@swagger.io",
    "generateModelDocs" : true,
    "hasImport" : true,
    "generateModelTests" : true,
    "generateApiTests" : true,
    "classFilename" : "SuperTagApi",
    "usePlayWS" : false,
    "generateModels" : true,
    "serializableModel" : false,
    "playVersion" : "play25",
    "inputSpec" : "specs.yml",
    "artifactUrl" : "https://github.com/swagger-api/swagger-codegen",
    "developerOrganization" : "Swagger",
    "baseName" : "SuperTag",
    "package" : "io.swagger.client.api",
    "imports" : [ {
      "import" : "io.swagger.client.model.ABody"
    }, {
      "import" : "io.swagger.client.model.ErrorResponse"
    } ]
    ...
  ] }
}
Swagger Templates
We can see in the api.mustache template
package {{package}};
{{#imports}}import {{import}};
{{/imports}}
I'm using the maven swagger codegen plugin with the following options:
<modelPackage>com.mysite.myproject.api.models</modelPackage>
<apiPackage>com.mysite.myproject.api</apiPackage>
So in my generated java class I will get:
SuperTagApi.class
package com.mysite.myproject.api;
import com.mysite.myproject.api.models.ErrorResponse;
import com.mysite.myproject.api.models.ABody;
What I'd like is a way to tell swagger-codegen that one class should be imported as it is now, but the second should be imported from another package.
To do that, my idea was to use vendor extensions (as you can see above) and manually list the classes that I want imported from a given package (which will actually be generated from the CORE.yml file) versus the ones defined in my specs.yml, for which I want the original generated package name.
I tried adding the x-core-imports vendor extension in multiple different places, trying to get access to it. None of them put it at the same level as the imports: [{ "import": ... }] section of result.json, because different endpoints/methods are grouped into the same file when their tag is identical.
I modified my api.mustache like so:
{{^vendorExtensions.x-imports}}
{{#imports}}import {{import}};
{{/imports}}
{{/vendorExtensions.x-imports}}
{{#vendorExtensions.x-imports}}
{{#vendorExtensions.x-core-imports}}
import com.mysite.core.api.models.{{.}};
{{/vendorExtensions.x-core-imports}}
import com.mysite.myproject.api.models.{{.}};
{{/vendorExtensions.x-imports}}
Do you know at which level in the yml file I have to put my vendor extensions to be able to access them from api.mustache? (Without modifying swagger codegen, just modifying templates and yml specs files)

Problem with cloudformation stack update and launch template version / autoscaling group

I have a stack in CloudFormation (ECS cluster, App LB, Auto Scaling group, launch templates, etc.). It all works fine and we have been using it in production and pre-production environments for a while.
A problem recently arose while trying to push a stack update. I made some changes to UserData in the AWS::EC2::LaunchTemplate. If I launch a new stack from this template, it works great.
BUT:
If I make a change set and apply a stack update, CloudFormation creates a NEW launch template version; however, the Auto Scaling group still references the OLD version.
Looking at the AWS docs for AWS::AutoScaling::AutoScalingGroup LaunchTemplateSpecification, I see:
"AWS CloudFormation does not support specifying $Latest, or $Default for the template version number."
Has anyone wrangled with stack updates creating new versions of resources that need to be referenced elsewhere? I feel like I am missing something obvious.
Yay, I'm dumb:
use Fn::GetAtt
OK, make fun of me for using JSON, not YAML:
...
"ECSAutoScalingGroup": {
  "Type": "AWS::AutoScaling::AutoScalingGroup",
  "Properties": {
    "VPCZoneIdentifier": { "Ref": "Subnets" },
    "MinSize": "1",
    "MaxSize": "10",
    "DesiredCapacity": { "Ref": "DesiredInstanceCount" },
    "MixedInstancesPolicy": {
      "InstancesDistribution": {
        "OnDemandBaseCapacity": "0",
        "OnDemandPercentageAboveBaseCapacity": { "Ref": "PercentOnDemand" }
      },
      "LaunchTemplate": {
        "LaunchTemplateSpecification": {
          "LaunchTemplateId": { "Ref": "ECSLaunchTemplate" },
          "Version": { "Fn::GetAtt": [ "ECSLaunchTemplate", "LatestVersionNumber" ] }
        },
        "Overrides": [
          { "InstanceType": "m5.xlarge" },
          { "InstanceType": "t3.xlarge" },
          { "InstanceType": "m4.xlarge" },
          { "InstanceType": "r4.xlarge" },
          { "InstanceType": "c4.xlarge" }
        ]
      }
    }
  }
},
...
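For anyone writing the template in YAML, the relevant part would look roughly like this (a sketch of the same LaunchTemplateSpecification using the short-form intrinsic functions; only the Version reference matters here):
ECSAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MixedInstancesPolicy:
      LaunchTemplate:
        LaunchTemplateSpecification:
          LaunchTemplateId: !Ref ECSLaunchTemplate
          Version: !GetAtt ECSLaunchTemplate.LatestVersionNumber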

Spark REST API: Failed to find data source: com.databricks.spark.csv

I have a PySpark file stored on S3, and I am trying to run it using the Spark REST API.
I am running the following command:
curl -X POST http://<ip-address>:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
  "action" : "CreateSubmissionRequest",
  "appArgs" : [ "testing.py" ],
  "appResource" : "s3n://accessKey:secretKey/<bucket-name>/testing.py",
  "clientSparkVersion" : "1.6.1",
  "environmentVariables" : {
    "SPARK_ENV_LOADED" : "1"
  },
  "mainClass" : "org.apache.spark.deploy.SparkSubmit",
  "sparkProperties" : {
    "spark.driver.supervise" : "false",
    "spark.app.name" : "Simple App",
    "spark.eventLog.enabled" : "true",
    "spark.submit.deployMode" : "cluster",
    "spark.master" : "spark://<ip-address>:6066",
    "spark.jars" : "spark-csv_2.10-1.4.0.jar",
    "spark.jars.packages" : "com.databricks:spark-csv_2.10:1.4.0"
  }
}'
The testing.py file contains this code snippet:
myContext = SQLContext(sc)
format = "com.databricks.spark.csv"
dataFrame1 = myContext.read.format(format).option("header", "true").option("inferSchema", "true").option("delimiter",",").load(location1).repartition(1)
dataFrame2 = myContext.read.format(format).option("header", "true").option("inferSchema", "true").option("delimiter",",").load(location2).repartition(1)
outDataFrame = dataFrame1.join(dataFrame2, dataFrame1.values == dataFrame2.valuesId)
outDataFrame.write.format(format).option("header", "true").option("nullValue","").save(outLocation)
But on this line:
dataFrame1 = myContext.read.format(format).option("header", "true").option("inferSchema", "true").option("delimiter",",").load(location1).repartition(1)
I get exception:
java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
Caused by: java.lang.ClassNotFoundException: com.databricks.spark.csv.DefaultSource
I was trying different things out, and one of them was logging into the <ip-address> machine and running this command:
./bin/spark-shell --packages com.databricks:spark-csv_2.10:1.4.0
so that it would download spark-csv into the .ivy2/cache folder. But that didn't solve the problem. What am I doing wrong?
(Posted on behalf of the OP).
I first added spark-csv_2.10-1.4.0.jar to the driver and worker machines, and added:
"spark.driver.extraClassPath" : "absolute/path/to/spark-csv_2.10-1.4.0.jar",
"spark.executor.extraClassPath" : "absolute/path/to/spark-csv_2.10-1.4.0.jar",
Then I got the following error:
java.lang.NoClassDefFoundError: org/apache/commons/csv/CSVFormat
Caused by: java.lang.ClassNotFoundException: org.apache.commons.csv.CSVFormat
Then I added commons-csv-1.4.jar to both machines and added:
"spark.driver.extraClassPath" : "/absolute/path/to/spark-csv_2.10-1.4.0.jar:/absolute/path/to/commons-csv-1.4.jar",
"spark.executor.extraClassPath" : "/absolute/path/to/spark-csv_2.10-1.4.0.jar:/absolute/path/to/commons-csv-1.4.jar",
And that solved my problem.
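Putting it together, the sparkProperties section of the submission request ended up looking roughly like this (a sketch combining the original request with the classpath entries above; the jar paths are placeholders for the actual locations on the machines):
"sparkProperties" : {
  "spark.driver.supervise" : "false",
  "spark.app.name" : "Simple App",
  "spark.eventLog.enabled" : "true",
  "spark.submit.deployMode" : "cluster",
  "spark.master" : "spark://<ip-address>:6066",
  "spark.jars" : "spark-csv_2.10-1.4.0.jar",
  "spark.jars.packages" : "com.databricks:spark-csv_2.10:1.4.0",
  "spark.driver.extraClassPath" : "/absolute/path/to/spark-csv_2.10-1.4.0.jar:/absolute/path/to/commons-csv-1.4.jar",
  "spark.executor.extraClassPath" : "/absolute/path/to/spark-csv_2.10-1.4.0.jar:/absolute/path/to/commons-csv-1.4.jar"
}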