Grails Spring Security REST Plugin - Token Storage Failure

I'm setting up a Grails project with the Spring Security REST plugin and I'm having some trouble. When I make the following request to /api/login with a valid username and password:
Accept: application/json
Content-Type: application/json
{
    "username": "validuser",
    "password": "validpassword"
}
I get the following exception:
Error 2014-08-09 11:30:04,839 [http-bio-8080-exec-6] ERROR [/myphotoid-api].[default] - Servlet.service() for servlet [default] in context with path [/myphotoid-api] threw exception
Message: java.lang.Class cannot be cast to java.lang.String
Line | Method
->> 38 | storeToken in com.odobo.grails.plugin.springsecurity.rest.token.storage.GormTokenStorageService
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 97 | doFilter in com.odobo.grails.plugin.springsecurity.rest.RestAuthenticationFilter
| 82 | doFilter . in grails.plugin.springsecurity.web.authentication.logout.MutableLogoutFilter
| 63 | doFilter in com.odobo.grails.plugin.springsecurity.rest.RestLogoutFilter
| 82 | doFilter . in com.brandseye.cors.CorsFilter
| 1145 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 615 | run . . . in java.util.concurrent.ThreadPoolExecutor$Worker
^ 745 | run in java.lang.Thread
and then my client receives a 302 to /login/auth, the regular stateful login page. :(
However, if I make the following request to /api/login with a valid username and an invalid password:
Accept: application/json
Content-Type: application/json
{
    "username": "validuser",
    "password": "badpassword"
}
I get a 401, which I guess is what I should expect.
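For reference, the login request above can be reproduced with curl (the host, port and context path /myphotoid-api are taken from the error log; adjust as needed):

```shell
curl -i -X POST 'http://localhost:8080/myphotoid-api/api/login' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -d '{"username": "validuser", "password": "validpassword"}'
```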
Here is the relevant section from my Config.groovy:
// Added by the Spring Security Core plugin:
grails.plugin.springsecurity.userLookup.userDomainClassName = 'com.campuscardtools.myphotoid.Person'
grails.plugin.springsecurity.userLookup.authorityJoinClassName = 'com.campuscardtools.myphotoid.PersonRole'
grails.plugin.springsecurity.authority.className = 'com.campuscardtools.myphotoid.Role'
grails.plugin.springsecurity.controllerAnnotations.staticRules = [
'/api/login': ['permitAll'],
'/': ['permitAll'],
'/index': ['permitAll'],
'/index.gsp': ['permitAll'],
'/assets/**': ['permitAll']
]
grails.plugin.springsecurity.rest.token.storage.useGorm = true
grails.plugin.springsecurity.rest.token.storage.gorm.tokenDomainClassName = com.campuscardtools.myphotoid.AuthenticationToken
grails.plugin.springsecurity.filterChain.chainMap = [
'/api/**': 'JOINED_FILTERS,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter', // Stateless chain
'/**': 'JOINED_FILTERS,-restTokenValidationFilter,-restExceptionTranslationFilter' // Traditional chain
]
Thank you in advance for your help!

kau's comment had the answer:
Looks like your tokenDomainClassName needs to be enclosed within quotes – kau Aug 22 at 14:01
So I changed this:
grails.plugin.springsecurity.rest.token.storage.gorm.tokenDomainClassName = com.campuscardtools.myphotoid.AuthenticationToken
to this:
grails.plugin.springsecurity.rest.token.storage.gorm.tokenDomainClassName = 'com.campuscardtools.myphotoid.AuthenticationToken'
Without the quotes, ConfigSlurper resolves the bare class reference to a java.lang.Class object rather than a String, which is what produced the "java.lang.Class cannot be cast to java.lang.String" error in storeToken.
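For GORM token storage to work, the plugin also needs the token domain class itself. A minimal sketch (the property names follow the plugin's documented defaults; adjust if your version maps them differently):

```groovy
package com.campuscardtools.myphotoid

// Minimal token domain class for GormTokenStorageService.
// By default the plugin expects 'tokenValue' and 'username' properties.
class AuthenticationToken {
    String tokenValue
    String username

    static mapping = {
        version false
    }
}
```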

Check the plugin configuration section in the documentation: http://alvarosanchez.github.io/grails-spring-security-rest/docs/guide/configuration.html
You have to configure the chains properly in grails.plugin.springsecurity.filterChain.chainMap:
grails.plugin.springsecurity.filterChain.chainMap = [
'/api/**': 'JOINED_FILTERS,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter', // Stateless chain
'/**': 'JOINED_FILTERS,-restTokenValidationFilter,-restExceptionTranslationFilter' // Traditional chain
]

Related

using quarkus jwt token in micronaut application

I have two microservices:
auth-service, developed with Quarkus, which generates a JWT token.
service1, developed with Micronaut, whose endpoints I need to authenticate using auth-service. Can anyone please explain how to achieve this?
Both services can be found here:
https://github.com/microservices-j/auth-service
https://github.com/microservices-j/service1
I generate the token using auth-service and pass it to service1, but I get an unauthorized response.
> Task :run
Micronaut (v3.6.3)

12:49:25.382 [main] DEBUG i.m.s.a.AuthenticationModeCondition - CookieBasedAuthenticationModeCondition is not fulfilled because micronaut.security.authentication is not one of [cookie, idtoken].
12:49:25.678 [main] INFO io.micronaut.runtime.Micronaut - Startup completed in 1157ms. Server Running: http://localhost:8082
12:50:13.738 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.a.AuthenticationModeCondition - CookieBasedAuthenticationModeCondition is not fulfilled because micronaut.security.authentication is not one of [cookie, idtoken].
12:50:13.755 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.t.reader.HttpHeaderTokenReader - Looking for bearer token in Authorization header
12:50:13.755 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.t.reader.DefaultTokenResolver - Request GET, /swagger-ui, no token found.
12:50:13.759 [default-nioEventLoopGroup-1-2] DEBUG i.m.security.rules.IpPatternsRule - One or more of the IP patterns matched the host address [0:0:0:0:0:0:0:1]. Continuing request processing.
12:50:13.762 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.rules.AbstractSecurityRule - The given roles [[isAnonymous()]] matched one or more of the required roles [[isAnonymous()]]. Allowing the request
12:50:13.762 [default-nioEventLoopGroup-1-2] DEBUG i.m.security.filters.SecurityFilter - Authorized request GET /swagger-ui. The rule provider io.micronaut.security.rules.ConfigurationInterceptUrlMapRule authorized the request.
12:50:14.734 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.t.reader.HttpHeaderTokenReader - Looking for bearer token in Authorization header
12:50:14.734 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.t.reader.DefaultTokenResolver - Request GET, /swagger/service1-0.0.yml, no token found.
12:50:14.735 [default-nioEventLoopGroup-1-2] DEBUG i.m.security.rules.IpPatternsRule - One or more of the IP patterns matched the host address [0:0:0:0:0:0:0:1]. Continuing request processing.
12:50:14.736 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.rules.InterceptUrlMapRule - No url map pattern exact match found for path [/swagger/service1-0.0.yml] and method [GET]. Searching in patterns with no defined method.
12:50:14.736 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.rules.InterceptUrlMapRule - Url map pattern found for path [/swagger/service1-0.0.yml]. Comparing roles.
12:50:14.736 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.rules.AbstractSecurityRule - The given roles [[isAnonymous()]] matched one or more of the required roles [[isAnonymous()]]. Allowing the request
12:50:14.736 [default-nioEventLoopGroup-1-2] DEBUG i.m.security.filters.SecurityFilter - Authorized request GET /swagger/service1-0.0.yml. The rule provider io.micronaut.security.rules.ConfigurationInterceptUrlMapRule authorized the request.
12:50:38.395 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.t.reader.HttpHeaderTokenReader - Looking for bearer token in Authorization header
12:50:38.395 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.t.reader.DefaultTokenResolver - Token eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJwbXMiLCJzdWIiOiJkZW1vLnVzZXIiLCJpYXQiOjE2NjMzNDY5OTcsImV4cCI6MTY2MzM0NzI5NywianRpIjoiOGM1Mzc2ODMtOGM4Yy00MjgyLWFiYWUtMTU2Yzg3MjgzNGZhIn0.SxCAUxryW315Q3WlRSk6PypUh9s6K-Wce3zrB5Hmycs found in request GET /hello
12:50:38.418 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.t.jwt.validator.JwtValidator - Validating signed JWT
12:50:38.473 [default-nioEventLoopGroup-1-2] DEBUG i.m.security.filters.SecurityFilter - Attributes: sub=>demo.user, iss=>pms, exp=>Fri Sep 16 12:54:57 EDT 2022, iat=>Fri Sep 16 12:49:57 EDT 2022, jti=>8c537683-8c8c-4282-abae-156c872834fa
12:50:38.474 [default-nioEventLoopGroup-1-2] DEBUG i.m.security.rules.IpPatternsRule - One or more of the IP patterns matched the host address [0:0:0:0:0:0:0:1]. Continuing request processing.
12:50:38.474 [default-nioEventLoopGroup-1-2] DEBUG i.m.s.rules.AbstractSecurityRule - None of the given roles [[isAnonymous(), isAuthenticated()]] matched the required roles [[]]. Rejecting the request
12:50:38.474 [default-nioEventLoopGroup-1-2] DEBUG i.m.security.filters.SecurityFilter - Unauthorized request GET /hello. The rule provider io.micronaut.security.rules.SecuredAnnotationRule rejected the request.
In your Quarkus application you are generating a signed JWT:
@ApplicationScoped
@PermitAll
@Path("/login")
class LoginResource {
    @Produces(MediaType.TEXT_PLAIN)
    @Path("/")
    @POST
    fun login(): Response {
        val token = Jwt
            .issuer("pms")
            .subject("demo.user")
            .sign()
        return Response.ok(token).build()
    }
}
In your Quarkus application properties you use a public/private key pair to sign the JWT:
quarkus.http.port=8081
smallrye.jwt.sign.key.location=/Users/bala/labs/auth-service/jwt/privateKey.pem
mp.jwt.verify.issuer=demo
mp.jwt.verify.publickey.location=/Users/bala/labs/auth-service/jwt/publicKey.pem
But in the Micronaut microservice you are trying to validate the JWT using a shared secret:
micronaut:
  server:
    port: 8082
  application:
    name: service1
  security:
    authentication: bearer
    basic-auth:
      enabled: false # <-- Add this to disable basic auth
    token:
      jwt:
        signatures:
          secret:
            generator:
              secret: ${JWT_GENERATOR_SIGNATURE_SECRET:pleaseChangeThisSecretForANewOne}
Of course this fails. You have the following two options:
1. The easy one: make Quarkus generate the signed JWT using a secret.
2. Use the public key in Micronaut to validate the JWT (see RSASignatureConfiguration).
For option 2 you have to add an additional bean to the application context:
@Singleton
@Named
class MySignatureConfig : RSASignatureConfiguration {
    override fun getPublicKey(): RSAPublicKey {
        val encoded: ByteArray = Base64.getDecoder().decode("<your public key as a string>")
        val keyFactory: KeyFactory = KeyFactory.getInstance("RSA")
        val keySpec = X509EncodedKeySpec(encoded)
        return keyFactory.generatePublic(keySpec) as RSAPublicKey
    }
}
This will allow Micronaut to validate the JWT. Additionally, this is how Micronaut Security works in the basic-authentication flow; simply exchange AuthenticationFetcher with TokenAuthenticationFetcher. Knowing this flow helps when debugging.
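Whichever option you pick, a quick sanity check is to decode the token's header (the first dot-separated segment, which is plain base64-encoded JSON) and confirm the `alg` field matches what the validating service is configured for. Using the token from the question's log:

```shell
# Decode the JWT header (first '.'-separated segment of the token).
# The header string below is copied from the question's log output.
HEADER='eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9'
echo "$HEADER" | base64 -d
# prints {"typ":"JWT","alg":"HS256"}
```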

Problem when querying Raw Data with STH-Comet - Returns empty

I have Orion, Cygnus and STH-Comet (installed and configured in formal mode). Each component runs in its own Docker container, and I set up the infrastructure with docker-compose.yml.
The Cygnus container is configured as follows:
image: fiware/cygnus-ngsi:latest
hostname: cygnus
container_name: cygnus
volumes:
  - /home/ubuntu/cygnus/multisink_agent.conf:/opt/fiware-cygnus/docker/cygnus-ngsi/multisink_agent.conf
depends_on:
  - mongo
networks:
  - default
expose:
  - "5050"
  - "5080"
ports:
  - "5050:5050"
  - "5080:5080"
environment:
  - CYGNUS_SERVICE_PORT=5050
  - CYGNUS_MONITORING_TYPE=http
  - CYGNUS_AGENT_NAME=cygnus-ngsi
  - CYGNUS_MONGO_SERVICE_PORT=5050
  - CYGNUS_MONGO_HOSTS=mongo:27017
  - CYGNUS_MONGO_USER=
  - CYGNUS_MONGO_PASS=
  - CYGNUS_MONGO_ENABLE_ENCODING=false
  - CYGNUS_MONGO_ENABLE_GROUPING=false
  - CYGNUS_MONGO_ENABLE_NAME_MAPPINGS=false
  - CYGNUS_MONGO_DATA_MODEL=dm-by-entity
  - CYGNUS_MONGO_ATTR_PERSISTENCE=column
  - CYGNUS_MONGO_DB_PREFIX=sth_
  - CYGNUS_MONGO_COLLECTION_PREFIX=sth_
  - CYGNUS_MONGO_ENABLE_LOWERCASE=false
  - CYGNUS_MONGO_BATCH_TIMEOUT=30
  - CYGNUS_MONGO_BATCH_TTL=10
  - CYGNUS_MONGO_DATA_EXPIRATION=0
  - CYGNUS_MONGO_COLLECTIONS_SIZE=0
  - CYGNUS_MONGO_MAX_DOCUMENTS=0
  - CYGNUS_MONGO_BATCH_SIZE=1
  - CYGNUS_LOG_LEVEL=DEBUG
  - CYGNUS_SKIP_CONF_GENERATION=false
  - CYGNUS_STH_ENABLE_ENCODING=false
  - CYGNUS_STH_ENABLE_GROUPING=false
  - CYGNUS_STH_ENABLE_NAME_MAPPINGS=false
  - CYGNUS_STH_DB_PREFIX=sth_
  - CYGNUS_STH_COLLECTION_PREFIX=sth_
  - CYGNUS_STH_DATA_MODEL=dm-by-entity
  - CYGNUS_STH_ENABLE_LOWERCASE=false
  - CYGNUS_STH_BATCH_TIMEOUT=30
  - CYGNUS_STH_BATCH_TTL=10
  - CYGNUS_STH_DATA_EXPIRATION=0
  - CYGNUS_STH_BATCH_SIZE=1
Obs: In the multisink_agent.conf file I changed the service and the servicepath:
cygnus-ngsi.sources.http-source-mongo.handler.default_service = tese
cygnus-ngsi.sources.http-source-mongo.handler.default_service_path = /iot
And the STH-Comet container looks like this:
image: fiware/sth-comet:latest
hostname: sth
container_name: sth
depends_on:
  - cygnus
  - mongo
networks:
  - default
expose:
  - "8666"
ports:
  - "8666:8666"
environment:
  - STH_HOST=0.0.0.0
  - STH_PORT=8666
  - DB_URI=mongo:27017
  - DB_USERNAME=
  - DB_PASSWORD=
  - LOGOPS_LEVEL=DEBUG
In the STH-Comet config.js file I enabled CORS and I changed the defaultService and the defaultServicePath. The file looks like this:
var config = {};

// STH server configuration
//--------------------------
config.server = {
  host: 'localhost',
  port: '8666',
  // Default value: "testservice".
  defaultService: 'tese',
  // Default value: "/testservicepath".
  defaultServicePath: '/iot',
  filterOutEmpty: 'true',
  aggregationBy: ['day', 'hour', 'minute'],
  temporalDir: 'temp',
  maxPageSize: '100'
};

// Cors Configuration
config.cors = {
  // The enabled is use to set CORS policy
  enabled: 'true',
  options: {
    origin: ['*'],
    headers: [
      'Access-Control-Allow-Origin',
      'Access-Control-Allow-Headers',
      'Access-Control-Request-Headers',
      'Origin, Referer, User-Agent'
    ],
    additionalHeaders: ['fiware-servicepath', 'fiware-service'],
    credentials: 'true'
  }
};

// Database configuration
//------------------------
config.database = {
  dataModel: 'collection-per-entity',
  user: '',
  password: '',
  authSource: '',
  URI: 'localhost:27017',
  replicaSet: '',
  prefix: 'sth_',
  collectionPrefix: 'sth_',
  poolSize: '5',
  writeConcern: '1',
  shouldStore: 'both',
  truncation: {
    expireAfterSeconds: '0',
    size: '0',
    max: '0'
  },
  ignoreBlankSpaces: 'true',
  nameMapping: {
    enabled: 'false',
    configFile: './name-mapping.json'
  },
  nameEncoding: 'false'
};

// Logging configuration
//------------------------
config.logging = {
  level: 'debug',
  format: 'pipe',
  proofOfLifeInterval: '60',
  processedRequestLogStatisticsInterval: '60'
};

module.exports = config;
I use Cygnus to persist historical data. STH-Comet is used only to query raw and aggregated data.
The Cygnus subscription in Orion was created like this:
{
  "description": "A subscription All Entities",
  "subject": {
    "entities": [
      {
        "idPattern": ".*"
      }
    ],
    "condition": {
      "attrs": []
    }
  },
  "notification": {
    "http": {
      "url": "http://cygnus:5050/notify"
    },
    "attrs": [],
    "attrsFormat": "legacy"
  },
  "expires": "2040-01-01T14:00:00.00Z",
  "throttling": 5
}
The headers used for fiware-service and fiware-servicepath are:
Fiware-service: tese
Fiware-servicepath: /iot
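For example, a raw-data request carrying those headers looks like this (host and port as configured above):

```shell
curl -s 'http://localhost:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?lastN=10' \
  -H 'Fiware-Service: tese' \
  -H 'Fiware-ServicePath: /iot'
```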
The entity data is stored in the orion-tese database, in the entities collection:
{
"_id" : {
"id" : "Tank1",
"type" : "Tank",
"servicePath" : "/iot"
},
"attrNames" : [
"temperature"
],
"attrs" : {
"temperature" : {
"value" : 0.333,
"type" : "Float",
"mdNames" : [ ],
"creDate" : 1594334464,
"modDate" : 1594337770
}
},
"creDate" : 1594334464,
"modDate" : 1594337771,
"lastCorrelator" : "f86d0d74-c23c-11ea-9c82-0242ac1c0005"
}
The raw and aggregated data are stored in sth_tese.
I have the collections:
sth_/iot_Tank1_Tank.aggr
and
sth_/iot_Tank1_Tank
The sth_/iot_Tank1_Tank raw data is in mongoDB:
{
"_id" : ObjectId("5f079d0369591c06b0fc981a"),
"temperature" : 279,
"recvTime" : ISODate("2020-07-09T22:41:05.670Z")
}
{
"_id" : ObjectId("5f07a9eb69591c06b0fc981b"),
"temperature" : 0.333,
"recvTime" : ISODate("2020-07-09T23:36:11.160Z")
}
When I run: http://localhost:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?aggrMethod=sum&aggrPeriod=minute
or
http://localhost:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&aggrMethod=sum&aggrPeriod=minute
I have the result: "sum": 279 and "sum": 0.333. I can recover ALL the aggregated data, max, min, sum, sum2.
The difficulty is with STH-Comet: when I try to retrieve the raw data, the return code is 200 but the value comes back empty.
I've tried with APIs v1 and v2, to no avail.
request with v2:
http://sth:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
Return
{
"type": "StructuredValue",
"value": []
}
request with v1:
http://sth:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?lastN=10
Return
{
"contextResponses": [{
"contextElement": {
"attributes": [{
"name": "temperature",
"values": []
}],
"id": "Tank1",
"isPattern": false,
"type": "Tank"
},
"statusCode": {
"code": "200",
"reasonPhrase": "OK"
}
}]
}
The STH-Comet log shows that it is online and connects to the correct database:
time=2020-07-09T22:39:06.698Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Establishing connection to the database at mongodb://#mongo:27017/sth_tese
time=2020-07-09T22:39:06.879Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Connection successfully established to the database at mongodb://#mongo:27017/sth_tese
time=2020-07-09T22:39:07.218Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_SERVER_START | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Server started at http://0.0.0.0:8666
The STH-Comet log with the api v2 request:
time=2020-07-09T23:46:47.400Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=GET /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.404Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Getting access to the raw data collection for retrieval...
time=2020-07-09T23:46:47.408Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=The raw data collection for retrieval exists
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=No raw data available for the request: /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Responding with no points
According to the log, it gets access to the raw data collection (msg=Getting access to the raw data collection for retrieval...) and confirms that the collection exists (msg=The raw data collection for retrieval exists). But it cannot recover this data: it reports that no raw data is available and returns no points: msg=No raw data available for the request and msg=Responding with no points.
I already read the configuration part in the documentation. I've reinstalled everything, several times. I combed all settings and I can't find anything to justify this problem.
What could it be?
Could someone with expertise in STH-Comet give any guidance?
Thanks!
Sometimes the way in which STH tries to recover information doesn't match the way in which Cygnus stores it. However, that doesn't seem to be the case here. The data model used by STH is configured with config.database.dataModel and it seems to be correct: collection-per-entity (as you have collections like sth_/iot_Tank1_Tank, which corresponds to a single entity, i.e. the one with id Tank1 and type Tank).
Assuming that the setting in config.js is not being overridden by a DATA_MODEL env var (although it would be wise to check that by looking at the env vars actually injected into the Docker container running STH, I guess with docker inspect), the only way I think we can continue debugging is to inspect which actual query STH runs on MongoDB that ends in No raw data available for the request.
MongoDB has a profiler that allows recording every query done in the DB. Thus the procedure would be as follows:
1. Avoid (or minimize) any other usage of the MongoDB instance, to avoid "noise" in the information recorded by the profiler.
2. Start the profiler in "all queries" mode (i.e. profiling level 2).
3. Do the query at the STH API.
4. Stop the profiler.
5. Check the queries recorded by the profiler as a consequence of the request done in step 3.
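As a sketch, those profiler steps in the mongo shell would look like this (assuming the STH database is sth_tese, as in your logs):

```javascript
// Run in the mongo shell against the STH database.
use sth_tese
db.setProfilingLevel(2)   // level 2 = record all operations
// ...now issue the STH API request, then:
db.setProfilingLevel(0)   // stop profiling
db.system.profile.find().sort({ ts: -1 }).pretty()   // inspect recorded queries
```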
Explaining the usage of the MongoDB profiler is out of the scope of this answer, but the reference I provided above is a good starting point if you don't know it already.
Once you have information about the queries, please provide feedback as comments to this answer. Thanks!

Error Parsing HTTP Response Using Apama HTTP Client plugin

I am trying to do an HTTP call to a REST API using Apama HTTP Client plugin. I am able to send a request to the REST Resource, but while parsing the response, I am getting the below error.
WARN [20176] - Failed to parse the event `{contentType:application/json,sag.type:apamax.httpserversample.HTTPResponse,http:{headers:{contentLength:50,content-type:application/json,content-length:50},statusCode:200,method:POST,path:/rest/POC_422837/WS/provider/apamaTestConn,cookies:{},statusReason:OK}}{body:{status:Hello Apama. How are you doing?}}`
from httpClient due to the error: Unable to parse event apamax.httpserversample.HTTPResponse:
Unable to parse string from the map '{status:Hello Apama. How are you doing?}':
Invalid datatype, could not cast to string
The YAML config file looks like this:
connectivityPlugins:
  HTTPClientGenericTransport:
    libraryName: connectivity-http-client
    class: HTTPClient
startChains:
  httpClient:
    - apama.eventMap
    - mapperCodec:
        apamax.httpserversample.HTTPRequest:
          towardsTransport:
            mapFrom:
              - metadata.http.path: payload.path
              - metadata.requestId: payload.id
              - metadata.http.method: payload.method
              - payload: payload.data
            defaultValue:
              - metadata.contentType: application/json
              - metadata.sag.type: HelloWorld
        apamax.httpserversample.HTTPResponse:
          towardsHost:
            mapFrom:
              - payload.body: payload
              - payload.id: metadata.requestId
        apamax.httpserversample.HTTPError:
          towardsHost:
            mapFrom:
              - payload.id: metadata.requestId
              - payload.code: metadata.http.statusCode
              - payload.message: metadata.http.statusReason
    - classifierCodec:
        rules:
          - apamax.httpserversample.HTTPResponse:
              - metadata.http.statusCode: 200
          - apamax.httpserversample.HTTPError:
              - metadata.http.statusCode:
    - jsonCodec:
        filterOnContentType: true
    - stringCodec
    - HTTPClientGenericTransport:
        host: ${CORRELATOR_HOST}
        port: ${CORRELATOR_PORT}
Please help.
I believe the problem is this mapping in the config:
apamax.httpserversample.HTTPResponse:
  towardsHost:
    mapFrom:
      - payload.body: payload
      - payload.id: metadata.requestId
It maps the whole payload of the response to HTTPResponse.body. However, as you can see from the warning, the payload is actually a map, so you need to map the status field instead:
- payload.body: payload.status

Error initializing the application: No datastore implementation specified

I want to use the ElasticSearch plugin in Grails 2.5 with MongoDB. My BuildConfig.groovy file is:
grails.servlet.version = "3.0" // Change depending on target container compliance (2.5 or 3.0)
grails.project.class.dir = "target/classes"
grails.project.test.class.dir = "target/test-classes"
grails.project.test.reports.dir = "target/test-reports"
grails.project.work.dir = "target/work"
grails.project.target.level = 1.6
grails.project.source.level = 1.6
//grails.project.war.file = "target/${appName}-${appVersion}.war"
grails.project.fork = [
// configure settings for compilation JVM, note that if you alter the Groovy version forked compilation is required
// compile: [maxMemory: 256, minMemory: 64, debug: false, maxPerm: 256, daemon:true],
// configure settings for the test-app JVM, uses the daemon by default
test: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, daemon:true],
// configure settings for the run-app JVM
run: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve:false],
// configure settings for the run-war JVM
war: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve:false],
// configure settings for the Console UI JVM
console: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256]
]
grails.project.dependency.resolver = "maven" // or ivy
grails.project.dependency.resolution =
{
// inherit Grails' default dependencies
inherits("global") {
// specify dependency exclusions here; for example, uncomment this to disable ehcache:
// excludes 'ehcache'
}
log "error" // log level of Ivy resolver, either 'error', 'warn', 'info', 'debug' or 'verbose'
checksums true // Whether to verify checksums on resolve
legacyResolve false // whether to do a secondary resolve on plugin installation, not advised and here for backwards compatibility
repositories {
inherits true // Whether to inherit repository definitions from plugins
grailsPlugins()
grailsHome()
mavenLocal()
grailsCentral()
mavenCentral()
// uncomment these (or add new ones) to enable remote dependency resolution from public Maven repositories
//mavenRepo "http://repository.codehaus.org"
//mavenRepo "http://download.java.net/maven/2/"
//mavenRepo "http://repository.jboss.com/maven2/"
}
dependencies {
// specify dependencies here under either 'build', 'compile', 'runtime', 'test' or 'provided' scopes e.g.
// runtime 'mysql:mysql-connector-java:5.1.29'
// runtime 'org.postgresql:postgresql:9.3-1101-jdbc41'
//runtime "org.elasticsearch:elasticsearch:0.90.3"
//runtime "org.elasticsearch:elasticsearch-lang-groovy:1.5.0"
test "org.grails:grails-datastore-test-support:1.0.2-grails-2.4"
}
plugins {
// plugins for the build system only
build ":tomcat:7.0.55.2" // or ":tomcat:8.0.20"
// plugins for the compile step
compile ":scaffolding:2.1.2"
compile ':cache:1.1.8'
compile ":asset-pipeline:2.1.5"
compile ':mongodb:3.0.3'
// plugins needed at runtime but not for compilation
//runtime ":hibernate4:4.3.8.1" // or ":hibernate:3.6.10.18"
runtime ":database-migration:1.4.0"
runtime ":jquery:1.11.1"
runtime ":elasticsearch:0.0.4.4"
// Uncomment these to enable additional asset-pipeline capabilities
//compile ":sass-asset-pipeline:1.9.0"
//compile ":less-asset-pipeline:1.10.0"
//compile ":coffee-asset-pipeline:1.8.0"
//compile ":handlebars-asset-pipeline:1.3.0.3"
}
}
And my Config.groovy file is:
// locations to search for config files that get merged into the main config;
// config files can be ConfigSlurper scripts, Java properties files, or classes
// in the classpath in ConfigSlurper format
// grails.config.locations = [ "classpath:${appName}-config.properties",
// "classpath:${appName}-config.groovy",
// "file:${userHome}/.grails/${appName}-config.properties",
// "file:${userHome}/.grails/${appName}-config.groovy"]
// if (System.properties["${appName}.config.location"]) {
// grails.config.locations << "file:" + System.properties["${appName}.config.location"]
// }
grails.project.groupId = appName // change this to alter the default package name and Maven publishing destination
// The ACCEPT header will not be used for content negotiation for user agents containing the following strings (defaults to the 4 major rendering engines)
grails.mime.disable.accept.header.userAgents = ['Gecko', 'WebKit', 'Presto', 'Trident']
grails.mime.types = [ // the first one is the default format
all: '*/*', // 'all' maps to '*' or the first available format in withFormat
atom: 'application/atom+xml',
css: 'text/css',
csv: 'text/csv',
form: 'application/x-www-form-urlencoded',
html: ['text/html','application/xhtml+xml'],
js: 'text/javascript',
json: ['application/json', 'text/json'],
multipartForm: 'multipart/form-data',
rss: 'application/rss+xml',
text: 'text/plain',
hal: ['application/hal+json','application/hal+xml'],
xml: ['text/xml', 'application/xml']
]
// URL Mapping Cache Max Size, defaults to 5000
//grails.urlmapping.cache.maxsize = 1000
// Legacy setting for codec used to encode data with ${}
grails.views.default.codec = "html"
// The default scope for controllers. May be prototype, session or singleton.
// If unspecified, controllers are prototype scoped.
grails.controllers.defaultScope = 'singleton'
// GSP settings
grails {
views {
gsp {
encoding = 'UTF-8'
htmlcodec = 'xml' // use xml escaping instead of HTML4 escaping
codecs {
expression = 'html' // escapes values inside ${}
scriptlet = 'html' // escapes output from scriptlets in GSPs
taglib = 'none' // escapes output from taglibs
staticparts = 'none' // escapes output from static template parts
}
}
// escapes all not-encoded output at final stage of outputting
// filteringCodecForContentType.'text/html' = 'html'
}
}
grails.converters.encoding = "UTF-8"
// scaffolding templates configuration
grails.scaffolding.templates.domainSuffix = 'Instance'
// Set to false to use the new Grails 1.2 JSONBuilder in the render method
grails.json.legacy.builder = false
// enabled native2ascii conversion of i18n properties files
grails.enable.native2ascii = true
// packages to include in Spring bean scanning
grails.spring.bean.packages = []
// whether to disable processing of multi part requests
grails.web.disable.multipart=false
// request parameters to mask when logging exceptions
grails.exceptionresolver.params.exclude = ['password']
// configure auto-caching of queries by default (if false you can cache individual queries with 'cache: true')
grails.hibernate.cache.queries = false
// configure passing transaction's read-only attribute to Hibernate session, queries and criterias
// set "singleSession = false" OSIV mode in hibernate configuration after enabling
grails.hibernate.pass.readonly = false
// configure passing read-only to OSIV session by default, requires "singleSession = false" OSIV mode
grails.hibernate.osiv.readonly = false
environments {
development {
grails.logging.jul.usebridge = true
}
production {
grails.logging.jul.usebridge = false
// TODO: grails.serverURL = "http://www.changeme.com"
}
}
// log4j configuration
log4j.main = {
// Example of changing the log pattern for the default console appender:
//
//appenders {
// console name:'stdout', layout:pattern(conversionPattern: '%c{2} %m%n')
//}
error 'org.codehaus.groovy.grails.web.servlet', // controllers
'org.codehaus.groovy.grails.web.pages', // GSP
'org.codehaus.groovy.grails.web.sitemesh', // layouts
'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
'org.codehaus.groovy.grails.web.mapping', // URL mapping
'org.codehaus.groovy.grails.commons', // core / classloading
'org.codehaus.groovy.grails.plugins', // plugins
'org.codehaus.groovy.grails.orm.hibernate', // hibernate integration
'org.springframework',
'org.hibernate',
'net.sf.ehcache.hibernate'
}
/****************************** adding from the site *******************************/
elasticSearch {
    /**
     * Date formats used by the unmarshaller of the JSON responses
     */
    elasticSearch.datastoreImpl = "mongodbDatastore"
    date.formats = ["yyyy-MM-dd'T'HH:mm:ss'Z'"]
    /**
     * Hosts for remote ElasticSearch instances.
     * Will only be used with the "transport" client mode.
     * If the client mode is set to "transport" and no hosts are defined, ["localhost", 9300] will be used by default.
     */
    client.hosts = [
        [host: 'localhost', port: 9300]
    ]
    /**
     * Default mapping property exclusions
     *
     * No properties matching the given names will be mapped by default,
     * i.e. when using "searchable = true".
     *
     * This does not apply for classes using mapping by closure.
     */
    defaultExcludedProperties = ["password"]
    /**
     * Determines if the plugin should reflect any database save/update/delete automatically
     * on the ES instance. Defaults to false.
     */
    disableAutoIndex = false
    /**
     * Should the database be indexed at startup.
     *
     * The value may be a boolean true|false.
     * Indexing is always asynchronous (compared to the Searchable plugin) and executed after BootStrap.groovy.
     */
    bulkIndexOnStartup = true
    /**
     * Max number of requests to process at once. Reduce this value if you have memory issues when indexing a
     * big amount of data at once. If this setting is not specified, 500 will be used by default.
     */
    maxBulkRequest = 500
    /**
     * The name of the ElasticSearch mapping configuration property that annotates domain classes. The default is 'searchable'.
     */
    searchableProperty.name = 'searchable'
}
environments {
    development {
        /**
         * Possible values: "local", "node", "dataNode", "transport".
         * If set to null, "node" mode is used by default.
         */
        elasticSearch.client.mode = 'local'
    }
    test {
        elasticSearch {
            client.mode = 'local'
            index.store.type = 'memory' // store local node in memory and not on disk
        }
    }
    production {
        elasticSearch.client.mode = 'node'
    }
}
But while running the app, the following error comes up:
|Loading Grails 2.5.0
|Configuring classpath
.
|Environment set to development
.................................
|Packaging Grails application
....
|Compiling 1 source files
...................................
|Running Grails application
Error |
2015-06-01 16:40:59,813 [localhost-startStop-1] ERROR context.GrailsContextLoaderListener - Error initializing the application: No datastore implementation specified
Message: No datastore implementation specified
Line | Method
->> 135 | doCall in ElasticsearchGrailsPlugin$_closure1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 754 | invokeBeanDefiningClosure in grails.spring.BeanBuilder
| 584 | beans . . . . . . . . . . in ''
| 527 | invokeMethod in ''
| 266 | run . . . . . . . . . . . in java.util.concurrent.FutureTask
| 1142 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 617 | run . . . . . . . . . . . in java.util.concurrent.ThreadPoolExecutor$Worker
^ 745 | run in java.lang.Thread
Error |
Forked Grails VM exited with error
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
I suspect you have to change this line:
elasticSearch.datastoreImpl = "mongodbDatastore"
to
datastoreImpl = "mongodbDatastore"
as you're already nested inside the elasticSearch namespace.
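With that change applied, the relevant part of the closure would look like this (a sketch based on the config in the question; all other settings unchanged):

```groovy
elasticSearch {
    // No "elasticSearch." prefix here: property names inside this closure
    // are already resolved relative to the elasticSearch namespace
    datastoreImpl = "mongodbDatastore"

    date.formats = ["yyyy-MM-dd'T'HH:mm:ss'Z'"]
    client.hosts = [
        [host: 'localhost', port: 9300]
    ]
    defaultExcludedProperties = ["password"]
    disableAutoIndex = false
    bulkIndexOnStartup = true
    maxBulkRequest = 500
    searchableProperty.name = 'searchable'
}
```

The original line effectively set `elasticSearch.elasticSearch.datastoreImpl`, which is why the plugin reported "No datastore implementation specified".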

grails 2.3.7 connect to remote postgresql server

In my BuildConfig.groovy
I have :
dependencies {
    runtime 'org.postgresql:postgresql:9.3-1100-jdbc41'
}
In my DataSource.groovy
I have :
dataSource {
    pooled = true
    driverClassName = "org.postgresql.Driver"
    dialect = org.hibernate.dialect.PostgreSQLDialect
    hibernate {
        cache.use_second_level_cache = true
        cache.use_query_cache = true
        cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory'
    }
    // environment specific settings
    environments {
        development {
            dataSource {
                dbCreate = "create-drop" // one of 'create', 'create-drop', 'update'
                url = "jdbc:postgresql://ip:5432/security_dev"
                username = "uname"
                password = "pwd"
            }
        }
        test {
            dataSource {
                dbCreate = "create-drop" // one of 'create', 'create-drop', 'update'
                url = "jdbc:postgresql://ip:5432/security_dev"
                username = "uname"
                password = "pwd"
            }
        }
        production {
            dataSource {
                dbCreate = "update" // one of 'create', 'create-drop', 'update'
                url = "jdbc:postgresql://ip:5432/security_dev"
                username = "uname"
                password = "pwd"
            }
        }
    }
}
Here is the error message:
2014-04-08 15:02:48,390 [localhost-startStop-1] ERROR pool.ConnectionPool - Unable to create initial connections of pool.
Message: Driver:org.postgresql.Driver#afd862b returned null for URL:jdbc:h2:mem:grailsDB;MVCC=TRUE;LOCK_TIMEOUT=10000
Line | Method
->> 262 | run in java.util.concurrent.FutureTask
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 1145 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 615 | run . . . in java.util.concurrent.ThreadPoolExecutor$Worker
^ 744 | run in java.lang.Thread
(the same ConnectionPool error is repeated twice more with different driver instance hashes)
|Server running. Browse to http://localhost:8080/Postgresql_Grails_2.3.7
This configuration works with Grails 2.2.4.
What do I have to do to make it work under Grails 2.3.7?
Thanks in advance.
I had the same problem after my upgrade. These are my dependencies (jdbc4, not jdbc41):
dependencies {
    runtime 'org.postgresql:postgresql:9.3-1100-jdbc4'
}
And I don't know if it is the problem, but I think you left out a '}' before hibernate:
dataSource {
    pooled = true
    driverClassName = "org.postgresql.Driver"
    username = "username"
    password = "password"
}
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = false
    cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory'
    singleSession = true
}
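Putting both fixes together, a corrected DataSource.groovy might look like this (a sketch reusing the host, database name, and credentials from the question; the key point is that the dataSource block now closes before hibernate and environments):

```groovy
dataSource {
    pooled = true
    driverClassName = "org.postgresql.Driver"
    dialect = org.hibernate.dialect.PostgreSQLDialect
}
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = true
    cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory'
}
// environment specific settings
environments {
    development {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:postgresql://ip:5432/security_dev"
            username = "uname"
            password = "pwd"
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            url = "jdbc:postgresql://ip:5432/security_dev"
            username = "uname"
            password = "pwd"
        }
    }
}
```

With the environments block nested inside dataSource, Grails never saw the per-environment url settings, which is why the pool fell back to the default in-memory H2 URL (`jdbc:h2:mem:grailsDB`) seen in the error message.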