Spring Cloud Eureka client not registering - spring-cloud

I have recently updated my microservices project from Spring Boot 2.0.5 to 2.7.4.
Also updated the Spring Cloud dependencies to 2021.0.4.
Migrated from Netflix Zuul to Spring Cloud Gateway.
My Eureka server is running on port 8000, which I can access in the browser and see the Eureka dashboard.
The gateway is running on port 8001, registers successfully with Eureka, and I can see its status "UP" in the dashboard.
But no other services are registered with Eureka. The application.yml files of all the other services are nearly identical to the gateway-service's. Below are some code snippets from one of the services:
Problem 1 - Application name UNKNOWN
DiscoveryClient_UNKNOWN/192.168.1.3: registering service...
Problem 2
Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8761/eureka/}, exception=java.net.ConnectException: Connection refused (Connection refused) stacktrace=com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: Connection refused (Connection refused)
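Both of these look like the Eureka client running on its built-in defaults, as if the settings in application.yml were never applied: without spring.application.name the instance registers as UNKNOWN, and without a configured service URL the client falls back to http://localhost:8761/eureka/. A minimal sketch of the two properties the client is evidently not picking up (the defaultZone below is just the value from my setup):
spring:
  application:
    name: sms-service   # without this the instance registers as UNKNOWN
eureka:
  client:
    serviceUrl:
      # without this the client defaults to http://localhost:8761/eureka/
      defaultZone: http://localhost:8000/eureka/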
Eureka-server config
spring:
  application:
    name: eureka-service
  cloud:
    config:
      uri: ${eureka.instance.hostname}:${server.port}
server:
  port: 8000
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka
  server:
    enableSelfPreservation: false
logging:
  level:
    org.springframework: ERROR
    com.debugfactory.trypz: ERROR
  file:
    name: logs/eureka-service.log
Gateway-service application.yml - this one registers with Eureka under the proper name
spring:
  main:
    web-application-type: reactive
    allow-bean-definition-overriding: true
  application:
    name: zuul-service
  servlet:
    multipart:
      max-file-size: 10MB
      max-request-size: 10MB
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
        - id: sms-service
          uri: http://sms-service/
          predicates:
            - Path=/sms/**
server:
  port: 8001
  max-thread: 100
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8000/eureka/}
  instance:
    preferIpAddress: true
logging:
  file:
    name: logs/zuul-service.log
  pattern:
    file: "%d %-5level [%thread] %logger{0} : %msg%n"
  level:
    org.springframework: INFO
    com.bayview: DEBUG
    org.netflix: INFO
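As an aside, if the sms-service route is meant to be resolved through Eureka rather than plain DNS, the uri would normally use the lb:// scheme (a minimal sketch, assuming spring-cloud-starter-loadbalancer is on the gateway's classpath; the route id and path are the ones from my config):
spring:
  cloud:
    gateway:
      routes:
        - id: sms-service
          # lb:// tells the gateway to resolve the host via the discovery client / load balancer
          uri: lb://sms-service
          predicates:
            - Path=/sms/**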
SMS-Service <- this one tries to reach Eureka on port 8761 and registers with the name UNKNOWN
Application.yml
spring:
  application:
    name: sms-service
  data:
    mongodb:
      authentication-database: admin
      database: sms
      host: localhost
      port: 27017
  main:
    allow-bean-definition-overriding: true
  config:
    activate:
      on-profile: local
server:
  port: 8022
  servlet:
    context-path: /sms
  tomcat:
    max-threads: 10
logging:
  level:
    org.springframework.web: ERROR
    org.hibernate: ERROR
    com.cryptified.exchange: DEBUG
    org.springframework.data: ERROR
    org.springframework.security: ERROR
    org.springframework.web.client: DEBUG
  file:
    name: logs/sms-service.log
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8000/eureka/}
  instance:
    preferIpAddress: true
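Note that spring.config.activate.on-profile: local is part of this same YAML document, which makes the whole file a profile-specific document: if the local profile is not active at startup, none of it (including spring.application.name and the eureka block) is applied, which would match both symptoms above. A minimal sketch of splitting the file so the name and Eureka settings always apply (an assumption about the intent, not something confirmed here):
spring:
  application:
    name: sms-service
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8000/eureka/}
---
spring:
  config:
    activate:
      on-profile: local
  data:
    mongodb:
      host: localhost
      port: 27017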

build.gradle
plugins {
    id 'org.springframework.boot' version '2.7.5'
    id 'io.spring.dependency-management' version '1.0.15.RELEASE'
    id 'java'
}
repositories {
    mavenCentral()
}
ext {
    set('springCloudVersion', "2021.0.4")
    set('testcontainersVersion', "1.17.4")
    mapStructVersion = '1.5.3.Final'
}
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
    implementation 'org.flywaydb:flyway-core'
    implementation 'org.springframework.cloud:spring-cloud-starter-config'
    implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client'
    implementation 'org.springframework.boot:spring-boot-starter-aop'
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.boot:spring-boot-starter-validation'
    implementation "org.mapstruct:mapstruct:${mapStructVersion}"
    annotationProcessor "org.mapstruct:mapstruct-processor:${mapStructVersion}"
    compileOnly 'org.projectlombok:lombok'
    runtimeOnly 'org.postgresql:postgresql'
    annotationProcessor 'org.springframework.boot:spring-boot-configuration-processor'
    annotationProcessor 'org.projectlombok:lombok'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testImplementation 'org.testcontainers:junit-jupiter'
}
dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
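One more thing worth double-checking: the build pulls in spring-cloud-starter-config, but the sms-service application.yml above has no spring.config.import. Since Spring Cloud 2020.0 the bootstrap phase is disabled by default, so a config server is normally declared directly in application.yml (a sketch only; the URL is just an example):
spring:
  config:
    # 'optional:' lets the service start even when the config server is unreachable
    import: optional:configserver:http://localhost:8888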
.....
application.yml in your project
URI_TO_SERVER_CONFIG: http://localhost:8888
spring:
  application:
    name: name_project_your
  cloud:
    config:
      uri: ${URI_TO_SERVER_CONFIG}
  config:
    import: configserver:${URI_TO_SERVER_CONFIG}
  profiles:
    active: microservice, discoveries, name_project_your
Application configs in the git repository
application-discoveries.yml
SERVER_PORT_SERVICE_DISCOVERY: 8761
IP_ADDRESS_SERVICE_DISCOVERY: http://localhost
eureka:
  instance:
    instance-id: ${spring.application.name}:${random.uuid}
    # https://docs.spring.io/spring-cloud-netflix/docs/current/reference/html/#changing-the-eureka-instance-id
  client:
    service-url:
      defaultZone: ${IP_ADDRESS_SERVICE_DISCOVERY}:${SERVER_PORT_SERVICE_DISCOVERY}/eureka
    lease-renewal-interval-in-seconds: 5
name-project-your.yml
DB_IP: 127.0.0.1
DB_PORT: 5432
DB_DATABASE_NAME: name_db
DB_URL: jdbc:postgresql://${DB_IP}:${DB_PORT}/${DB_DATABASE_NAME}
DB_USERNAME: postgres
DB_PASSWORD: postgres
DB_SCHEMA: your_schema
spring:
  application:
    name: name-project-your
  cloud:
    config:
      enabled: true
    discovery:
      enabled: true
  datasource:
    url: ${DB_URL}
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
  flyway:
    baseline-on-migrate: true
    locations: classpath:db/migration
    create-schemas: true
    default-schema: ${DB_SCHEMA}
application-microservice.yml
eureka:
  client:
    fetch-registry: true
    register-with-eureka: true
logging:
  level:
    com:
      netflix:
        eureka: debug
        discovery: debug
server:
  port: 0
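When the microservice and discoveries profiles are both active, the documents above are merged, so the effective Eureka client configuration for a service ends up roughly like this (a sketch of the merged result, with the placeholders resolved to the defaults defined above):
eureka:
  client:
    fetch-registry: true
    register-with-eureka: true
    service-url:
      defaultZone: http://localhost:8761/eureka
  instance:
    # a random uuid keeps instance ids unique even though server.port: 0 assigns a random port
    instance-id: ${spring.application.name}:${random.uuid}
server:
  port: 0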

Related

Deploying HA Vault To GKE - dial tcp 127.0.0.1:8200: connect: connection refused

As per the official documentation (https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-google-cloud-gke), the following works as expected:
helm install vault hashicorp/vault \
--set='server.ha.enabled=true' \
--set='server.ha.raft.enabled=true'
I can then run:
kubectl exec vault-0 -- vault status
And this works perfectly fine.
However, I've noticed that if I don't have Raft enabled, I get the "dial tcp 127.0.0.1:8200: connect: connection refused" error message:
helm install vault hashicorp/vault \
--set='server.ha.enabled=true'
I'm trying to work out why my Vault deployment is giving me the same issue.
I'm trying to deploy Vault into GKE with auto-unseal keys and a Google Cloud Storage backend configured.
My values.yaml file contains:
global:
  enabled: true
  tlsDisable: false
injector:
  enabled: true
  replicas: 1
  port: 8080
  leaderElector:
    enabled: true
  image:
    repository: "hashicorp/vault-k8s"
    tag: "latest"
    pullPolicy: IfNotPresent
  agentImage:
    repository: "hashicorp/vault"
    tag: "latest"
  authPath: "auth/kubernetes"
  webhook:
    failurePolicy: Ignore
    matchPolicy: Exact
    objectSelector: |
      matchExpressions:
      - key: app.kubernetes.io/name
        operator: NotIn
        values:
        - {{ template "vault.name" . }}-agent-injector
  certs:
    secretName: vault-lab.company.com-cert
    certName: tls.crt
    keyName: tls.key
server:
  enabled: true
  image:
    repository: "hashicorp/vault"
    tag: "latest"
    pullPolicy: IfNotPresent
  extraEnvironmentVars:
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/vault-gcs/service-account.json
    GOOGLE_REGION: europe-west2
    GOOGLE_PROJECT: sandbox-vault-lab
  volumes:
    - name: vault-gcs
      secret:
        secretName: vault-gcs
    - name: vault-lab-cert
      secret:
        secretName: vault-lab.company.com-cert
  volumeMounts:
    - name: vault-gcs
      mountPath: /vault/userconfig/vault-gcs
      readOnly: true
    - name: vault-lab-cert
      mountPath: /etc/tls
      readOnly: true
  service:
    enabled: true
    type: NodePort
    externalTrafficPolicy: Cluster
    port: 8200
    targetPort: 8200
    annotations:
      cloud.google.com/app-protocols: '{"http":"HTTPS"}'
      beta.cloud.google.com/backend-config: '{"ports": {"http":"config-default"}}'
  ha:
    enabled: true
    replicas: 3
    config: |
      listener "tcp" {
        tls_disable = 0
        tls_min_version = "tls12"
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "gcs" {
        bucket = "vault-lab-bucket"
        ha_enabled = "true"
      }

      service_registration "kubernetes" {}

      # Example configuration for using auto-unseal, using Google Cloud KMS. The
      # GKMS keys must already exist, and the cluster must have a service account
      # that is authorized to access GCP KMS.
      seal "gcpckms" {
        project    = "sandbox-vault-lab"
        region     = "global"
        key_ring   = "vault-helm-unseal-kr"
        crypto_key = "vault-helm-unseal-key"
      }
Something here must be misconfigured, but I'm not sure what.
Any help would be appreciated.
EDIT:
Even after configuring Raft, I still encounter the same issue:
raft:
  enabled: true
  setNodeId: false
  config: |
    ui = false

    listener "tcp" {
      # tls_disable = 0
      address = "[::]:8200"
      cluster_address = "[::]:8201"
      tls_cert_file = "/etc/tls/tls.crt"
      tls_key_file = "/etc/tls/tls.key"
    }

    #storage "raft" {
    #  path = "/vault/data"
    #}

    storage "gcs" {
      bucket = "vault-lab-bucket"
      ha_enabled = "true"
    }

    service_registration "kubernetes" {}

KOWL Kafka Connect yaml configuration - has anyone managed to get it right?

Getting this error: {"level":"fatal","msg":"failed to load config","error":"failed to unmarshal YAML config into config struct: 1 error(s) decoding:\n\n* '' has invalid keys: connect"}
with the following yaml:
kafka:
  brokers:
    - 192.168.12.12:9092
  schemaRegistry:
    enabled: true
    urls:
      - "http://192.168.12.12:8081"
connect:
  enabled: true
  clusters:
    - name: xy
      url: "http://192.168.12.12:8091"
      tls:
        enabled: false
      username: 1
      password: 1
    - name: xya
      url: http://192.168.12.12:8092
Try downgrading your image back to v1.5.0.
It seems something broke in master recently.
You can find all the images here

UI 404 - Vault Kubernetes

I'm testing out Vault in Kubernetes and am installing it via the Helm chart. I've created an overrides file that is an amalgamation of a few different pages from the official docs.
The pods come up OK and reach Ready status, and I can unseal Vault manually using 3 of the generated keys. However, I'm getting a 404 when browsing the UI; the UI is exposed externally on a Load Balancer in AKS. Here's my config:
global:
  enabled: true
  tlsDisable: false
injector:
  enabled: false
server:
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  # livenessProbe:
  #   enabled: true
  #   path: "/v1/sys/health?standbyok=true"
  #   initialDelaySeconds: 60
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
  extraVolumes:
    - type: secret
      name: vault-server-tls # Matches the ${SECRET_NAME} from above
  standalone:
    enabled: true
    config: |
      listener "tcp" {
        address = "[::]:8200"
        cluster_address = "[::]:8201"
        tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
        tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
        tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
      }

      storage "file" {
        path = "/vault/data"
      }

# Vault UI
ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 443
  # For Added Security, edit the below
  # loadBalancerSourceRanges:
  #   5.69.25.6/32
I'm still trying to get to grips with Vault. My liveness probe is commented out because it was permanently failing and causing the pod to be re-scheduled, even though the Vault service status showed it as healthy and awaiting an unseal. That's a side issue compared to the UI, but I mention it in case the failing liveness probe is related.
Thanks!
So, I don't think the documentation around deploying to Kubernetes with Helm is really that clear, but I was basically missing a ui = true flag in the HCL config stanza. Note that this is in addition to the value passed to the Helm chart:
# Vault UI
ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 443
Which I had mistakenly assumed was enough to enable the UI.
Here's the config now, with working UI:
global:
  enabled: true
  tlsDisable: false
injector:
  enabled: false
server:
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
  extraVolumes:
    - type: secret
      name: vault-server-tls # Matches the ${SECRET_NAME} from above
  standalone:
    enabled: true
    config: |
      ui = true

      listener "tcp" {
        address = "[::]:8200"
        cluster_address = "[::]:8201"
        tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
        tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
        tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
      }

      storage "file" {
        path = "/vault/data"
      }

# Vault UI
ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 443

SERVICE UNAVAILABLE - No raft leader when trying to create channel in Hyperledger fabric setup in Kubernetes

Start_orderer.sh file:
#edit *values.yaml file to be used with helm chart and deploy orderer through it
consensus_type=etcdraft
#change below instantiated variable for changing configuration of persistent volume sizes
persistence_status=true
persistent_volume_size=2Gi
while getopts "i:o:O:d:" c
do
  case $c in
    i) network_id=$OPTARG ;;
    o) number=$OPTARG ;;
    O) org_name=$OPTARG ;;
    d) domain=$OPTARG ;;
  esac
done
network_path=/etc/zeeve/fabric/${network_id}
source status.sh
cp ../yaml-files/orderer.yaml $network_path/yaml-files/orderer-${number}${org_name}_values.yaml
sed -i "s/persistence_status/$persistence_status/; s/persistent_volume_size/$persistent_volume_size/; s/consensus_type/$consensus_type/; s/number/$number/g; s/org_name/${org_name}/; s/domain/$domain/; " $network_path/yaml-files/orderer-${number}${org_name}_values.yaml
helm install orderer-${number}${org_name} --namespace blockchain-${org_name} -f $network_path/yaml-files/orderer-${number}${org_name}_values.yaml `pwd`/../helm-charts/hlf-ord
cmd_success $? orderer-${number}${org_name}
#update state of deployed component, used for pod level operations like start, stop, restart etc
update_statusfile helm orderer_${number}${org_name} orderer-${number}${org_name}
update_statusfile persistence orderer_${number}${org_name} $persistence_status
Configtx.yaml:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
Organizations:
  - &Orginvestor
    Name: investor
    ID: investorMSP
    MSPDir: ./crypto-config/investor/msp
    AnchorPeers:
      - Host: peer1.investor.demointainabs.emulya.com
        Port: 443
  - &Orgtrustee
    Name: trustee
    ID: trusteeMSP
    MSPDir: ./crypto-config/trustee/msp
    AnchorPeers:
      - Host: peer1.trustee.demointainabs.emulya.com
        Port: 443
  - &Orgwhlender
    Name: whlender
    ID: whlenderMSP
    MSPDir: ./crypto-config/whlender/msp
    AnchorPeers:
      - Host: peer1.whlender.demointainabs.emulya.com
        Port: 443
  - &Orgservicer
    Name: servicer
    ID: servicerMSP
    MSPDir: ./crypto-config/servicer/msp
    AnchorPeers:
      - Host: peer1.servicer.demointainabs.emulya.com
        Port: 443
  - &Orgissuer
    Name: issuer
    ID: issuerMSP
    MSPDir: ./crypto-config/issuer/msp
    AnchorPeers:
      - Host: peer1.issuer.demointainabs.emulya.com
        Port: 443
  - &Orgoriginator
    Name: originator
    ID: originatorMSP
    MSPDir: ./crypto-config/originator/msp
    AnchorPeers:
      - Host: peer1.originator.demointainabs.emulya.com
        Port: 443
  - &Orginvestor
    Name: investor
    ID: investorMSP
    MSPDir: ./crypto-config/investor/msp
    AnchorPeers:
      - Host: peer1.investor.intainabs.emulya.com
        Port: 443
  - &Orgtrustee
    Name: trustee
    ID: trusteeMSP
    MSPDir: ./crypto-config/trustee/msp
    AnchorPeers:
      - Host: peer1.trustee.intainabs.emulya.com
        Port: 443
  - &Orgwhlender
    Name: whlender
    ID: whlenderMSP
    MSPDir: ./crypto-config/whlender/msp
    AnchorPeers:
      - Host: peer1.whlender.intainabs.emulya.com
        Port: 443
  - &Orgservicer
    Name: servicer
    ID: servicerMSP
    MSPDir: ./crypto-config/servicer/msp
    AnchorPeers:
      - Host: peer1.servicer.intainabs.emulya.com
        Port: 443
  - &Orgissuer
    Name: issuer
    ID: issuerMSP
    MSPDir: ./crypto-config/issuer/msp
    AnchorPeers:
      - Host: peer1.issuer.intainabs.emulya.com
        Port: 443
  - &Orgoriginator
    Name: originator
    ID: originatorMSP
    MSPDir: ./crypto-config/originator/msp
    AnchorPeers:
      - Host: peer1.originator.intainabs.emulya.com
        Port: 443
Orderer: &OrdererDefaults
  OrdererType: etcdraft
  Addresses:
    - orderer1.originator.demointainabs.emulya.com:443
    - orderer2.trustee.demointainabs.emulya.com:443
    - orderer2.issuer.demointainabs.emulya.com:443
    - orderer1.trustee.demointainabs.emulya.com:443
    - orderer1.issuer.demointainabs.emulya.com:443
    - orderer1.originator.intainabs.emulya.com:443
    - orderer2.trustee.intainabs.emulya.com:443
    - orderer2.issuer.intainabs.emulya.com:443
    - orderer1.trustee.intainabs.emulya.com:443
    - orderer1.issuer.intainabs.emulya.com:443
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
  Kafka:
    Brokers:
      - kafka-hlf.blockchain-kz.svc.cluster.local:9092
  EtcdRaft:
    Consenters:
      - Host: orderer1.originator.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
        ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
      - Host: orderer2.trustee.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
      - Host: orderer2.issuer.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
      - Host: orderer1.trustee.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
      - Host: orderer1.issuer.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
      - Host: orderer1.originator.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
        ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
      - Host: orderer2.trustee.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
      - Host: orderer2.issuer.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
      - Host: orderer1.trustee.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
      - Host: orderer1.issuer.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
  Organizations:
Application: &ApplicationDefaults
  Organizations:
Profiles:
  BaseGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *Orgoriginator
        - *Orgtrustee
        - *Orgissuer
        - *Orgoriginator
        - *Orgtrustee
        - *Orgissuer
    Consortiums:
      MyConsortium:
        Organizations:
          - *Orginvestor
          - *Orgtrustee
          - *Orgwhlender
          - *Orgservicer
          - *Orgissuer
          - *Orgoriginator
          - *Orginvestor
          - *Orgtrustee
          - *Orgwhlender
          - *Orgservicer
          - *Orgissuer
          - *Orgoriginator
  BaseChannel:
    Consortium: MyConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Orgoriginator
        - *Orgissuer
        - *Orgservicer
        - *Orgwhlender
        - *Orgtrustee
        - *Orginvestor
        - *Orgoriginator
        - *Orgissuer
        - *Orgservicer
        - *Orgwhlender
        - *Orgtrustee
        - *Orginvestor
I am currently setting up a Hyperledger Fabric network in Kubernetes. My network includes 6 organizations and 5 orderer nodes. The orderers use Raft consensus. I have done the following:
Set up CA and TLS CA servers
Set up the ingress controller
Generated crypto material for the peers and orderers
Generated channel artifacts
Started peers and orderers
The next step is to create the channel on the orderer for each org and join the peers in each org to the channel. I am unable to create the channel. When requesting channel creation, I get the following error:
SERVICE UNAVAILABLE - No raft leader.
How can I fix this issue?
Can anyone please guide me on this? Thanks in advance.

Grails 3.3.0 M1 and 3.3.0 M2: combining MongoDB and Hibernate fails

Step 1: Create the app:
grails create-app test
Step 2: Create the domain class:
grails create-domain-class test
Step 3: Configure build.gradle:
compile 'org.grails.plugins:mongodb'
Step 4: Configure application.yml:
grails:
  mongodb:
    host: "localhost"
    port: 27017
    username: ""
    password: ""
    databaseName: "test"
environments:
  development:
    dataSource:
      dbCreate: none
      url: jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
      logSql: true
Step 5: Use mapWith in Test.groovy:
static mapWith = "mongo"
Step 6: Query Test in BootStrap:
println Test.findAll()
Step 7: Run the Grails app:
grails run-app
The error is:
org.springframework.jdbc.BadSqlGrammarException: Hibernate operation: could not prepare statement; bad SQL grammar [select this_.id as id1_0_0_, this_.version as version2_0_0_ from test this_]; nested exception is org.h2.jdbc.JdbcSQLException: Table "TEST" not found; SQL statement:
select this_.id as id1_0_0_, this_.version as version2_0_0_ from test this_ [42102-194]
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:231)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
at org.grails.orm.hibernate.GrailsHibernateTemplate.convertJdbcAccessException(GrailsHibernateTemplate.java:723)
at org.grails.orm.hibernate.GrailsHibernateTemplate.convertHibernateAccessException(GrailsHibernateTemplate.java:711)
at org.grails.orm.hibernate.GrailsHibernateTemplate.doExecute(GrailsHibernateTemplate.java:302)
at org.grails.orm.hibernate.GrailsHibernateTemplate.execute(GrailsHibernateTemplate.java:242)
at org.grails.orm.hibernate.GrailsHibernateTemplate.execute(GrailsHibernateTemplate.java:116)
at org.grails.orm.hibernate.HibernateGormStaticApi.list(HibernateGormStaticApi.groovy:74)
at org.grails.datastore.gorm.GormStaticApi.findAll(GormStaticApi.groovy:579)
at org.grails.datastore.gorm.GormEntity$Trait$Helper.findAll(GormEntity.groovy:671)
at org.grails.datastore.gorm.GormEntity$Trait$Helper$findAll$0.call(Unknown Source)
With other versions:
3.2.3 is OK
3.2.5 is OK
3.2.9 is OK
My code is:
the demo on GitHub
What changed in 3.3.0.M1 and 3.3.0.M2?
grails:
  mongodb:
    host: "localhost"
    port: 27017
    username: ""
    password: ""
    databaseName: "test"
environments:
  development:
    dataSource:
      dbCreate: none
      url: jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
      logSql: true
  test:
    dataSource:
      dbCreate: update
      url: jdbc:h2:mem:testDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
  production:
    dataSource:
      dbCreate: none
      url: jdbc:h2:./prodDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
      properties:
        jmxEnabled: true
        initialSize: 5
        maxActive: 50
        minIdle: 5
        maxIdle: 25
        maxWait: 10000
        maxAge: 600000
        timeBetweenEvictionRunsMillis: 5000
        minEvictableIdleTimeMillis: 60000
        validationQuery: SELECT 1
        validationQueryTimeout: 3
        validationInterval: 15000
        testOnBorrow: true
        testWhileIdle: true
        testOnReturn: false
        jdbcInterceptors: ConnectionState
        defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
Replace these lines with the following:
environments:
  development:
    grails:
      mongodb:
        host: "localhost"
        port: 27017
        username: ""
        password: ""
        databaseName: "test"
  test:
    mongodb:
      host: "localhost"
      port: 27017
      username: ""
      password: ""
      databaseName: "test"
  production:
    grails:
      mongodb:
        host: "localhost"
        port: 27017
        username: ""
        password: ""
        databaseName: "test"
    properties:
      jmxEnabled: true
      initialSize: 5
      maxActive: 50
      minIdle: 5
      maxIdle: 25
      maxWait: 10000
      maxAge: 600000
      timeBetweenEvictionRunsMillis: 5000
      minEvictableIdleTimeMillis: 60000
      validationQuery: SELECT 1
      validationQueryTimeout: 3
      validationInterval: 15000
      testOnBorrow: true
      testWhileIdle: true
      testOnReturn: false
      jdbcInterceptors: ConnectionState
      defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED