Keycloak JS Policy $evaluation is Undefined

I am creating a custom JS policy for Keycloak following the guide:
https://www.keycloak.org/docs/latest/authorization_services/index.html#_policy_js
Unfortunately, in my script all variables are undefined for some reason.
var context = $evaluation.context;
var identity = context.identity;
var permission = $evaluation.permission;
var resource = permission.resource;
var attributes = identity.getAttributes();
print('**** evaluation ' + JSON.stringify($evaluation));
print('**** context ' + JSON.stringify(context));
print('**** identity ' + JSON.stringify(identity));
print('**** attributes ' + JSON.stringify(attributes));
if (attributes.owner == identity.id) {
    $evaluation.grant();
}
Keycloak logs:
keycloak-authorization-keycloak-1 | 2022-12-30 14:52:46,319 WARN [org.keycloak.connections.httpclient.DefaultHttpClientFactory] (executor-thread-2) TruststoreProvider is disabled
keycloak-authorization-keycloak-1 | 2022-12-30 14:52:48,087 WARN [org.keycloak.services.managers.AuthenticationManager] (executor-thread-1) Required action provider factory 'CONFIGURE_RECOVERY_AUTHN_CODES' configured in the realm 'myrealm' is not available. Provider not found or feature is disabled.
keycloak-authorization-keycloak-1 | 2022-12-30 14:52:48,088 WARN [org.keycloak.services.managers.AuthenticationManager] (executor-thread-1) Required action provider factory 'UPDATE_EMAIL' configured in the realm 'myrealm' is not available. Provider not found or feature is disabled.
keycloak-authorization-keycloak-1 | **** evaluation undefined
keycloak-authorization-keycloak-1 | **** context undefined
keycloak-authorization-keycloak-1 | **** identity undefined
keycloak-authorization-keycloak-1 | **** attributes undefined
docker-compose.yml:
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    volumes:
      - keycloak:/opt/keycloak/data
      - ./js-policies/target/js-policies.jar:/opt/keycloak/providers/js-policies.jar
    ports:
      - 9000:8080
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
    command:
      - start-dev

It turns out the evaluation context is not null - it was JSON.stringify that was returning undefined. I suppose this is a limitation of the Nashorn engine: $evaluation and its properties are Java objects rather than plain JavaScript objects, so JSON.stringify cannot serialize them.
If you are curious, $evaluation gets mapped to the class DefaultEvaluation. From there you can deduce all the properties and values you can use in your policy by consulting the Javadoc at the link below:
https://www.keycloak.org/docs-api/18.0/javadocs/org/keycloak/authorization/policy/evaluation/DefaultEvaluation.html
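If the goal is the owner check from the question, here is a minimal sketch using the getters documented on DefaultEvaluation and the objects it returns, assuming the resource's owner was set to the requesting user's id when the resource was created. For debugging, printing individual getter results works fine, since string concatenation just calls toString() on the Java objects.

// minimal sketch - getters come from DefaultEvaluation and its related interfaces
var context = $evaluation.getContext();
var identity = context.getIdentity();
var permission = $evaluation.getPermission();
var resource = permission.getResource();

// print() works on these values directly, unlike JSON.stringify
print('**** identity id: ' + identity.getId());
print('**** resource owner: ' + resource.getOwner());

// grant only if the requesting user owns the resource
if (resource.getOwner() == identity.getId()) {
    $evaluation.grant();
}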

Related

Error: failed to connect to database: password authentication failed in Rust

I am trying to connect to a Postgres database in Rust using the sqlx crate.
main.rs:
use dotenv;
use sqlx::Pool;
use sqlx::PgPool;
use sqlx::query;

#[async_std::main]
async fn main() -> Result<(), Error> {
    dotenv::dotenv().ok();
    pretty_env_logger::init();
    let url = std::env::var("DATABASE_URL").unwrap();
    dbg!(url);
    let db_url = std::env::var("DATABASE_URL")?;
    let db_pool: PgPool = Pool::new(&db_url).await?;
    let rows = query!("select 1 as one").fetch_one(&db_pool).await?;
    dbg!(rows);
    let mut app = tide::new();
    app.at("/").get(|_| async move { Ok("Hello Rustacean!") });
    app.listen("127.0.0.1:8080").await?;
    Ok(())
}

#[derive(thiserror::Error, Debug)]
enum Error {
    #[error(transparent)]
    DbError(#[from] sqlx::Error),
    #[error(transparent)]
    IoError(#[from] std::io::Error),
    #[error(transparent)]
    VarError(#[from] std::env::VarError),
}
Here is my .env file:
DATABASE_URL=postgres://localhost/twitter
RUST_LOG=trace
Error log:
error: failed to connect to database: password authentication failed for user "ayman"
--> src/main.rs:19:16
|
19 | let rows = query!("select 1 as one").fetch_one(&db_pool).await?;
| ^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error: aborting due to previous error
error: could not compile `backend`.
Note:
There exists a database called twitter.
I have included the macros feature in sqlx's dependency:
sqlx = {version="0.3.5", features = ["runtime-async-std", "macros", "chrono", "json", "postgres", "uuid"]}
Am I missing some level of authentication for connecting to the database? I could not find it in the docs for the sqlx::query! macro.
The reason it is unable to authenticate is that you must provide credentials before accessing the database: the URL contains no user or password, so the connection is attempted as your OS user ("ayman") without a password. (The error appears at compile time because the query! macro connects to the database while compiling, using DATABASE_URL.)
There are two ways to provide the credentials.
Option 1: Change your URL to contain the credentials, for instance:
DATABASE_URL=postgres://localhost?dbname=mydb&user=postgres&password=postgres
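The userinfo form of the URL works as well; for instance (placeholder credentials, with the twitter database from the question):
DATABASE_URL=postgres://postgres:postgres@localhost/twitter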
Option 2: Use PgConnectOptions, for instance:
use sqlx::postgres::{PgConnectOptions, PgPool};
use sqlx::{Pool, Postgres};

let pool_options = PgConnectOptions::new()
    .host("localhost")
    .port(5432)
    .username("dbuser")
    .database("dbtest")
    .password("dbpassword");

let pool: PgPool = Pool::<Postgres>::connect_with(pool_options).await?;
Note: the sqlx version that I am using is sqlx = {version="0.5.1"}.
For more information, refer to the docs: https://docs.rs/sqlx/0.5.1/sqlx/postgres/struct.PgConnectOptions.html#method.password
Hope this helps.

CDC with WSO2 Streaming Integrator and Postgres DB

I am trying to set up Change Data Capture (CDC) between WSO2 Streaming Integrator and a local Postgres DB.
I have added the Postgres driver (v42.2.5) to SI_HOME/lib, and I am able to read data from the database from a Siddhi application.
I am following the CDCWithListeningMode example to implement CDC, and I am using pgoutput as the logical decoding plugin. But when I run the application I get the following log.
[2020-04-23_19-02-37_460] INFO {org.apache.kafka.connect.json.JsonConverterConfig} - JsonConverterConfig values:
converter.type = key
schemas.cache.size = 1000
schemas.enable = true
[2020-04-23_19-02-37_461] INFO {org.apache.kafka.connect.json.JsonConverterConfig} - JsonConverterConfig values:
converter.type = value
schemas.cache.size = 1000
schemas.enable = false
[2020-04-23_19-02-37_461] INFO {io.debezium.embedded.EmbeddedEngine$EmbeddedConfig} - EmbeddedConfig values:
access.control.allow.methods =
access.control.allow.origin =
bootstrap.servers = [localhost:9092]
header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
key.converter = class org.apache.kafka.connect.json.JsonConverter
listeners = null
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
offset.flush.interval.ms = 60000
offset.flush.timeout.ms = 5000
offset.storage.file.filename =
offset.storage.partitions = null
offset.storage.replication.factor = null
offset.storage.topic =
plugin.path = null
rest.advertised.host.name = null
rest.advertised.listener = null
rest.advertised.port = null
rest.host.name = null
rest.port = 8083
ssl.client.auth = none
task.shutdown.graceful.timeout.ms = 5000
value.converter = class org.apache.kafka.connect.json.JsonConverter
[2020-04-23_19-02-37_516] INFO {io.debezium.connector.common.BaseSourceTask} - offset.storage = io.siddhi.extension.io.cdc.source.listening.InMemoryOffsetBackingStore
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} - database.server.name = localhost_5432
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} - database.port = 5432
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} - table.whitelist = SweetProductionTable
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} - cdc.source.object = 1716717434
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} - database.hostname = localhost
[2020-04-23_19-02-37_518] INFO {io.debezium.connector.common.BaseSourceTask} - database.password = ********
[2020-04-23_19-02-37_518] INFO {io.debezium.connector.common.BaseSourceTask} - name = CDCWithListeningModeinsertSweetProductionStream
[2020-04-23_19-02-37_518] INFO {io.debezium.connector.common.BaseSourceTask} - server.id = 6140
[2020-04-23_19-02-37_519] INFO {io.debezium.connector.common.BaseSourceTask} - database.history = io.debezium.relational.history.FileDatabaseHistory
[2020-04-23_19-02-38_103] INFO {io.debezium.connector.postgresql.PostgresConnectorTask} - user 'user_name' connected to database 'db_name' on PostgreSQL 11.5, compiled by Visual C++ build 1914, 64-bit with roles:
role 'user_name' [superuser: false, replication: true, inherit: true, create role: false, create db: false, can log in: true] (Encoded)
[2020-04-23_19-02-38_104] INFO {io.debezium.connector.postgresql.PostgresConnectorTask} - No previous offset found
[2020-04-23_19-02-38_104] INFO {io.debezium.connector.postgresql.PostgresConnectorTask} - Taking a new snapshot of the DB and streaming logical changes once the snapshot is finished...
[2020-04-23_19-02-38_105] INFO {io.debezium.util.Threads} - Requested thread factory for connector PostgresConnector, id = localhost_5432 named = records-snapshot-producer
[2020-04-23_19-02-38_105] INFO {io.debezium.util.Threads} - Requested thread factory for connector PostgresConnector, id = localhost_5432 named = records-stream-producer
[2020-04-23_19-02-38_293] INFO {io.debezium.connector.postgresql.connection.PostgresConnection} - Obtained valid replication slot ReplicationSlot [active=false, latestFlushedLSN=null]
[2020-04-23_19-02-38_704] ERROR {io.siddhi.core.stream.input.source.Source} - Error on 'CDCWithListeningMode'. Connection to the database lost. Error while connecting at Source 'cdc' at 'insertSweetProductionStream'. Will retry in '5 sec'. (Encoded)
io.siddhi.core.exception.ConnectionUnavailableException: Connection to the database lost.
at io.siddhi.extension.io.cdc.source.CDCSource.lambda$connect$1(CDCSource.java:424)
at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:793)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.connect.errors.ConnectException: Cannot create replication connection
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.(PostgresReplicationConnection.java:87)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.(PostgresReplicationConnection.java:38)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$ReplicationConnectionBuilder.build(PostgresReplicationConnection.java:362)
at io.debezium.connector.postgresql.PostgresTaskContext.createReplicationConnection(PostgresTaskContext.java:65)
at io.debezium.connector.postgresql.RecordsStreamProducer.(RecordsStreamProducer.java:81)
at io.debezium.connector.postgresql.RecordsSnapshotProducer.(RecordsSnapshotProducer.java:70)
at io.debezium.connector.postgresql.PostgresConnectorTask.createSnapshotProducer(PostgresConnectorTask.java:133)
at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:86)
at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:45)
at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:677)
... 3 more
Caused by: io.debezium.jdbc.JdbcConnectionException: ERROR: could not access file "decoderbufs": No such file or directory
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.initReplicationSlot(PostgresReplicationConnection.java:145)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.(PostgresReplicationConnection.java:79)
... 12 more
Caused by: org.postgresql.util.PSQLException: ERROR: could not access file "decoderbufs": No such file or directory
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
at org.postgresql.replication.fluent.logical.LogicalCreateSlotBuilder.make(LogicalCreateSlotBuilder.java:48)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.initReplicationSlot(PostgresReplicationConnection.java:108)
... 13 more
Debezium defaults to the decoderbufs plugin - "could not access file "decoderbufs": No such file or directory".
According to this answer, the issue is due to the configuration of the decoderbufs plugin.
Details:
Postgres - 11.4
siddhi-cdc-io - 2.0.3
Debezium - 0.8.3
How do I configure the embedded Debezium engine to use the pgoutput plugin? Will changing this configuration fix the error?
Please help me with this issue. I have not found any resources that can help.
You either need to update Debezium to the latest 1.1 version - this will enable you to select the pgoutput plugin via the plugin.name config option - or you need to deploy (and maybe build) the decoderbufs.so library on your PostgreSQL server.
I'd recommend the former, as 0.8.3 is a very old version.
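For reference, the Debezium PostgreSQL connector selects the decoding plug-in through the plugin.name property. A minimal sketch of the relevant connector properties (host and database values echo the question's log; whether and how the Siddhi CDC source lets you pass these through depends on its version, so treat this as illustrative only):

# Debezium PostgreSQL connector properties (sketch)
connector.class=io.debezium.connector.postgresql.PostgresConnector
# pgoutput is only understood by newer Debezium versions (1.1 per the answer above)
plugin.name=pgoutput
database.hostname=localhost
database.port=5432
database.user=user_name
database.dbname=db_name
database.server.name=localhost_5432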
I observed this behavior with PostgreSQL 12 when I tried to do CDC with the pgoutput logical decoding output plug-in. It seems that even though I configured the database with pgoutput, the Siddhi extension tries to make the connection using "decoderbufs" as the decoding plug-in.
When I configured decoderbufs as the logical decoding output plug-in at the database level, I was able to use the Siddhi io extension without any issue.
It seems that, for now, Siddhi io CDC only supports the decoderbufs logical decoding output plug-in with PostgreSQL.

Ejabberd - ejabberd_auth_external:failure:103 External authentication program failed when calling 'check_password'

I already have a schema of users with an authentication key and wanted to do authentication via that. I tried implementing authentication via SQL, but due to the different structure of my schema I was getting errors, so I implemented the external authentication method instead. The technologies and OS used in my application are:
Node.JS
Ejabberd as XMPP server
MySQL Database
React-Native (Front-End)
OS - Ubuntu 18.04
I implemented the external authentication configuration as described in https://docs.ejabberd.im/admin/configuration/#external-script and took the PHP script https://www.ejabberd.im/files/efiles/check_mysql.php.txt as an example. But I am getting the error shown below in error.log. In ejabberd.yml I have the following configuration:
...
host_config:
  "example.org.co":
    auth_method: [external]
    extauth_program: "/usr/local/etc/ejabberd/JabberAuth.class.php"
    auth_use_cache: false
...
Also, is there any external auth JavaScript script?
Here are the error.log and ejabberd.log entries:
error.log
2019-03-19 07:19:16.814 [error]
<0.524.0>#ejabberd_auth_external:failure:103 External authentication
program failed when calling 'check_password' for admin#example.org.co:
disconnected
ejabberd.log
2019-03-19 07:19:16.811 [debug] <0.524.0>#ejabberd_http:init:151 S:
[{[<<"api">>],mod_http_api},{[<<"admin">>],ejabberd_web_admin}]
2019-03-19 07:19:16.811 [debug]
<0.524.0>#ejabberd_http:process_header:307 (#Port<0.13811>) http
query: 'POST' <<"/api/register">>
2019-03-19 07:19:16.811 [debug]
<0.524.0>#ejabberd_http:process:394 [<<"api">>,<<"register">>] matches
[<<"api">>]
2019-03-19 07:19:16.811 [info]
<0.364.0>#ejabberd_listener:accept:238 (<0.524.0>) Accepted connection
::ffff:ip -> ::ffff:ip
2019-03-19 07:19:16.814 [info]
<0.524.0>#mod_http_api:log:548 API call register
[{<<"user">>,<<"test">>},{<<"host">>,<<"example.org.co">>},{<<"password">>,<<"test">>}]
from ::ffff:ip
2019-03-19 07:19:16.814 [error]
<0.524.0>#ejabberd_auth_external:failure:103 External authentication
program failed when calling 'check_password' for admin#example.org.co:
disconnected
2019-03-19 07:19:16.814 [debug]
<0.524.0>#mod_http_api:extract_auth:171 Invalid auth data:
{error,invalid_auth}
Any help regarding this topic will be appreciated.
1) Your auth_method config looks good.
2) Here is a Python script I've used and adapted as an external authentication script for ejabberd.
#!/usr/bin/python
# ejabberd external auth script (Python 2).
# Protocol: ejabberd sends a 2-byte big-endian length followed by
# "operation:user:server[:password]"; the script answers with a 2-byte
# length (always 2) and a 2-byte result (1 = success, 0 = failure).
import sys
from struct import *

def openAuth(args):
    (user, server, password) = args
    # Implement your interactions with your service / database
    # Return True or False
    return True

def openIsuser(args):
    (user, server) = args
    # Implement your interactions with your service / database
    # Return True or False
    return True

def from_ejabberd():
    input_length = sys.stdin.read(2)
    (size,) = unpack('>h', input_length)
    return sys.stdin.read(size).split(':')

def to_ejabberd(result):
    if result:
        sys.stdout.write('\x00\x02\x00\x01')
    else:
        sys.stdout.write('\x00\x02\x00\x00')
    sys.stdout.flush()

def loop():
    switcher = {
        "auth": openAuth,
        "isuser": openIsuser,
        "setpass": lambda args: True,
        "tryregister": lambda args: False,
        "removeuser": lambda args: False,
        "removeuser3": lambda args: False,
    }
    # keep reading requests until ejabberd closes the pipe
    while True:
        data = from_ejabberd()
        to_ejabberd(switcher.get(data[0], lambda args: False)(data[1:]))

if __name__ == "__main__":
    try:
        loop()
    except error:  # struct.error when stdin is closed
        pass
I didn't write the communication with ejabberd (from_ejabberd() and to_ejabberd()) myself, and unfortunately I can't find the original source any more.
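To wire the script in, point extauth_program at it and make it executable (chmod +x); the path below is just an example:

host_config:
  "example.org.co":
    auth_method: [external]
    extauth_program: "/usr/local/etc/ejabberd/external_auth.py"
    auth_use_cache: false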

Can we connect to a public API endpoint instead of localhost using the Dredd tool?

I tried to use a public endpoint (e.g. api.openweathermap.org/data/2.5/weather?lat=35&lon=139) instead of localhost while configuring Dredd and ran the command to run the tool, but I am not able to connect to the endpoint through Dredd. It is throwing the error getaddrinfo EAI_AGAIN.
But when I try to connect to the endpoint using Postman, I am able to connect successfully.
There is no difference in calling a local or a remote endpoint, other than that some remote endpoints have authorization requirements.
This is an example of Dredd calling an external endpoint:
dredd.yml configuration file fragment
...
blueprint: doc/api.md
# endpoint: 'http://api-srv:5000'
endpoint: https://private-da275-notes69.apiary-mock.com
As you can see, the only change is the endpoint in the Dredd configuration file (created using dredd init).
But, as I mentioned, sometimes you'll need to provide authorization through a header or a query string parameter.
Dredd has hooks that allow you to change things before each transaction. For instance, say you'd like to add an api-key parameter to each URL before executing the request. This code can handle that:
hook.js
// Writing Dredd Hooks In Node.js
// Ref: http://dredd.org/en/latest/hooks-nodejs.html
var hooks = require('hooks');

hooks.beforeEach(function (transaction) {
  hooks.log('before each');
  // add query parameter to each transaction here
  let paramToAdd = 'api-key=23456';
  if (transaction.fullPath.indexOf('?') > -1)
    transaction.fullPath += '&' + paramToAdd;
  else
    transaction.fullPath += '?' + paramToAdd;
  hooks.log('before each fullpath: ' + transaction.fullPath);
});
Code at GitHub gist
Save this hook file anywhere in your project and then run Dredd passing the hook file:
dredd --hookfiles=./hook.js
That's it; after execution the log will show the actual URL used in each request.
info: Configuration './dredd.yml' found, ignoring other arguments.
2018-06-25T16:57:13.243Z - info: Beginning Dredd testing...
2018-06-25T16:57:13.249Z - info: Found Hookfiles: 0=/api/scripts/dredd-hoock.js
2018-06-25T16:57:13.263Z - hook: before each
2018-06-25T16:57:13.264Z - hook: before each fullpath: /notes?api-key=23456
"/notes?api-key=23456"
2018-06-25T16:57:16.095Z - pass: GET (200) /notes duration: 2829ms
2018-06-25T16:57:16.096Z - hook: before each
2018-06-25T16:57:16.096Z - hook: before each fullpath: /notes?api-key=23456
"/notes?api-key=23456"
2018-06-25T16:57:16.788Z - pass: POST (201) /notes duration: 691ms
2018-06-25T16:57:16.788Z - hook: before each
2018-06-25T16:57:16.789Z - hook: before each fullpath: /notes/abcd1234?api-key=23456
"/notes/abcd1234?api-key=23456"
2018-06-25T16:57:17.113Z - pass: GET (200) /notes/abcd1234 duration: 323ms
2018-06-25T16:57:17.114Z - hook: before each
2018-06-25T16:57:17.114Z - hook: before each fullpath: /notes/abcd1234?api-key=23456
"/notes/abcd1234?api-key=23456"
2018-06-25T16:57:17.353Z - pass: DELETE (204) /notes/abcd1234 duration: 238ms
2018-06-25T16:57:17.354Z - hook: before each
2018-06-25T16:57:17.354Z - hook: before each fullpath: /notes/abcd1234?api-key=23456
"/notes/abcd1234?api-key=23456"
2018-06-25T16:57:17.614Z - pass: PUT (200) /notes/abcd1234 duration: 259ms
2018-06-25T16:57:17.615Z - complete: 5 passing, 0 failing, 0 errors, 0 skipped, 5 total
2018-06-25T16:57:17.616Z - complete: Tests took 4372ms
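If the endpoint expects credentials in a header rather than the query string, a similar hook can set the header. A minimal sketch (the header value is a placeholder, not part of the original answer):

// hook-auth.js - set an Authorization header on every transaction
var hooks = require('hooks');

hooks.beforeEach(function (transaction) {
  // Dredd exposes the outgoing request on the transaction object
  transaction.request.headers['Authorization'] = 'Bearer <your-token>';
});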

Error initializing the application: No datastore implementation specified Message: No datastore implementation specified

I want to use the Elasticsearch plugin in Grails 2.5 with MongoDB. My BuildConfig.groovy file is:
grails.servlet.version = "3.0" // Change depending on target container compliance (2.5 or 3.0)
grails.project.class.dir = "target/classes"
grails.project.test.class.dir = "target/test-classes"
grails.project.test.reports.dir = "target/test-reports"
grails.project.work.dir = "target/work"
grails.project.target.level = 1.6
grails.project.source.level = 1.6
//grails.project.war.file = "target/${appName}-${appVersion}.war"
grails.project.fork = [
// configure settings for compilation JVM, note that if you alter the Groovy version forked compilation is required
// compile: [maxMemory: 256, minMemory: 64, debug: false, maxPerm: 256, daemon:true],
// configure settings for the test-app JVM, uses the daemon by default
test: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, daemon:true],
// configure settings for the run-app JVM
run: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve:false],
// configure settings for the run-war JVM
war: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve:false],
// configure settings for the Console UI JVM
console: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256]
]
grails.project.dependency.resolver = "maven" // or ivy
grails.project.dependency.resolution =
{
// inherit Grails' default dependencies
inherits("global") {
// specify dependency exclusions here; for example, uncomment this to disable ehcache:
// excludes 'ehcache'
}
log "error" // log level of Ivy resolver, either 'error', 'warn', 'info', 'debug' or 'verbose'
checksums true // Whether to verify checksums on resolve
legacyResolve false // whether to do a secondary resolve on plugin installation, not advised and here for backwards compatibility
repositories {
inherits true // Whether to inherit repository definitions from plugins
grailsPlugins()
grailsHome()
mavenLocal()
grailsCentral()
mavenCentral()
// uncomment these (or add new ones) to enable remote dependency resolution from public Maven repositories
//mavenRepo "http://repository.codehaus.org"
//mavenRepo "http://download.java.net/maven/2/"
//mavenRepo "http://repository.jboss.com/maven2/"
}
dependencies {
// specify dependencies here under either 'build', 'compile', 'runtime', 'test' or 'provided' scopes e.g.
// runtime 'mysql:mysql-connector-java:5.1.29'
// runtime 'org.postgresql:postgresql:9.3-1101-jdbc41'
//runtime "org.elasticsearch:elasticsearch:0.90.3"
//runtime "org.elasticsearch:elasticsearch-lang-groovy:1.5.0"
test "org.grails:grails-datastore-test-support:1.0.2-grails-2.4"
}
plugins {
// plugins for the build system only
build ":tomcat:7.0.55.2" // or ":tomcat:8.0.20"
// plugins for the compile step
compile ":scaffolding:2.1.2"
compile ':cache:1.1.8'
compile ":asset-pipeline:2.1.5"
compile ':mongodb:3.0.3'
// plugins needed at runtime but not for compilation
//runtime ":hibernate4:4.3.8.1" // or ":hibernate:3.6.10.18"
runtime ":database-migration:1.4.0"
runtime ":jquery:1.11.1"
runtime ":elasticsearch:0.0.4.4"
// Uncomment these to enable additional asset-pipeline capabilities
//compile ":sass-asset-pipeline:1.9.0"
//compile ":less-asset-pipeline:1.10.0"
//compile ":coffee-asset-pipeline:1.8.0"
//compile ":handlebars-asset-pipeline:1.3.0.3"
}
}
Also, my Config.groovy file is:
// locations to search for config files that get merged into the main config;
// config files can be ConfigSlurper scripts, Java properties files, or classes
// in the classpath in ConfigSlurper format
// grails.config.locations = [ "classpath:${appName}-config.properties",
// "classpath:${appName}-config.groovy",
// "file:${userHome}/.grails/${appName}-config.properties",
// "file:${userHome}/.grails/${appName}-config.groovy"]
// if (System.properties["${appName}.config.location"]) {
// grails.config.locations << "file:" + System.properties["${appName}.config.location"]
// }
grails.project.groupId = appName // change this to alter the default package name and Maven publishing destination
// The ACCEPT header will not be used for content negotiation for user agents containing the following strings (defaults to the 4 major rendering engines)
grails.mime.disable.accept.header.userAgents = ['Gecko', 'WebKit', 'Presto', 'Trident']
grails.mime.types = [ // the first one is the default format
all: '*/*', // 'all' maps to '*' or the first available format in withFormat
atom: 'application/atom+xml',
css: 'text/css',
csv: 'text/csv',
form: 'application/x-www-form-urlencoded',
html: ['text/html','application/xhtml+xml'],
js: 'text/javascript',
json: ['application/json', 'text/json'],
multipartForm: 'multipart/form-data',
rss: 'application/rss+xml',
text: 'text/plain',
hal: ['application/hal+json','application/hal+xml'],
xml: ['text/xml', 'application/xml']
]
// URL Mapping Cache Max Size, defaults to 5000
//grails.urlmapping.cache.maxsize = 1000
// Legacy setting for codec used to encode data with ${}
grails.views.default.codec = "html"
// The default scope for controllers. May be prototype, session or singleton.
// If unspecified, controllers are prototype scoped.
grails.controllers.defaultScope = 'singleton'
// GSP settings
grails {
views {
gsp {
encoding = 'UTF-8'
htmlcodec = 'xml' // use xml escaping instead of HTML4 escaping
codecs {
expression = 'html' // escapes values inside ${}
scriptlet = 'html' // escapes output from scriptlets in GSPs
taglib = 'none' // escapes output from taglibs
staticparts = 'none' // escapes output from static template parts
}
}
// escapes all not-encoded output at final stage of outputting
// filteringCodecForContentType.'text/html' = 'html'
}
}
grails.converters.encoding = "UTF-8"
// scaffolding templates configuration
grails.scaffolding.templates.domainSuffix = 'Instance'
// Set to false to use the new Grails 1.2 JSONBuilder in the render method
grails.json.legacy.builder = false
// enabled native2ascii conversion of i18n properties files
grails.enable.native2ascii = true
// packages to include in Spring bean scanning
grails.spring.bean.packages = []
// whether to disable processing of multi part requests
grails.web.disable.multipart=false
// request parameters to mask when logging exceptions
grails.exceptionresolver.params.exclude = ['password']
// configure auto-caching of queries by default (if false you can cache individual queries with 'cache: true')
grails.hibernate.cache.queries = false
// configure passing transaction's read-only attribute to Hibernate session, queries and criterias
// set "singleSession = false" OSIV mode in hibernate configuration after enabling
grails.hibernate.pass.readonly = false
// configure passing read-only to OSIV session by default, requires "singleSession = false" OSIV mode
grails.hibernate.osiv.readonly = false
environments {
development {
grails.logging.jul.usebridge = true
}
production {
grails.logging.jul.usebridge = false
// TODO: grails.serverURL = "http://www.changeme.com"
}
}
// log4j configuration
log4j.main = {
// Example of changing the log pattern for the default console appender:
//
//appenders {
// console name:'stdout', layout:pattern(conversionPattern: '%c{2} %m%n')
//}
error 'org.codehaus.groovy.grails.web.servlet', // controllers
'org.codehaus.groovy.grails.web.pages', // GSP
'org.codehaus.groovy.grails.web.sitemesh', // layouts
'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
'org.codehaus.groovy.grails.web.mapping', // URL mapping
'org.codehaus.groovy.grails.commons', // core / classloading
'org.codehaus.groovy.grails.plugins', // plugins
'org.codehaus.groovy.grails.orm.hibernate', // hibernate integration
'org.springframework',
'org.hibernate',
'net.sf.ehcache.hibernate'
}
/****************************** adding from the site *******************************/
elasticSearch {
    /**
     * Date formats used by the unmarshaller of the JSON responses
     */
    elasticSearch.datastoreImpl="mongodbDatastore"
    date.formats = ["yyyy-MM-dd'T'HH:mm:ss'Z'"]
    /**
     * Hosts for remote ElasticSearch instances.
     * Will only be used with the "transport" client mode.
     * If the client mode is set to "transport" and no hosts are defined, ["localhost", 9300] will be used by default.
     */
    client.hosts = [
        [host:'localhost', port:9300]
    ]
    /**
     * Default mapping property exclusions
     *
     * No properties matching the given names will be mapped by default
     * ie, when using "searchable = true"
     *
     * This does not apply for classes using mapping by closure
     */
    defaultExcludedProperties = ["password"]
    /**
     * Determines if the plugin should reflect any database save/update/delete automatically
     * on the ES instance. Default to false.
     */
    disableAutoIndex = false
    /**
     * Should the database be indexed at startup.
     *
     * The value may be a boolean true|false.
     * Indexing is always asynchronous (compared to Searchable plugin) and executed after BootStrap.groovy.
     */
    bulkIndexOnStartup = true
    /**
     * Max number of requests to process at once. Reduce this value if you have memory issue when indexing a big amount of data
     * at once. If this setting is not specified, 500 will be use by default.
     */
    maxBulkRequest = 500
    /**
     * The name of the ElasticSearch mapping configuration property that annotates domain classes. The default is 'searchable'.
     */
    searchableProperty.name = 'searchable'
}
environments {
    development {
        /**
         * Possible values : "local", "node", "dataNode", "transport"
         * If set to null, "node" mode is used by default.
         */
        elasticSearch.client.mode = 'local'
    }
    test {
        elasticSearch {
            client.mode = 'local'
            index.store.type = 'memory' // store local node in memory and not on disk
        }
    }
    production {
        elasticSearch.client.mode = 'node'
    }
}
But while running the app, the following error occurs:
|Loading Grails 2.5.0
|Configuring classpath
.
|Environment set to development
.................................
|Packaging Grails application
....
|Compiling 1 source files
...................................
|Running Grails application
Error |
2015-06-01 16:40:59,813 [localhost-startStop-1] ERROR context.GrailsContextLoaderListener - Error initializing the application: No datastore implementation specified
Message: No datastore implementation specified
Line | Method
->> 135 | doCall in ElasticsearchGrailsPlugin$_closure1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 754 | invokeBeanDefiningClosure in grails.spring.BeanBuilder
| 584 | beans . . . . . . . . . . in ''
| 527 | invokeMethod in ''
| 266 | run . . . . . . . . . . . in java.util.concurrent.FutureTask
| 1142 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 617 | run . . . . . . . . . . . in java.util.concurrent.ThreadPoolExecutor$Worker
^ 745 | run in java.lang.Thread
Error |
Forked Grails VM exited with error
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
I suspect you have to change this line:
elasticSearch.datastoreImpl="mongodbDatastore"
to
datastoreImpl="mongodbDatastore"
since you're already nested inside the elasticSearch namespace.
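In other words, the relevant fragment of Config.groovy would look roughly like this (only the datastoreImpl line changes; everything else stays as in the question):

elasticSearch {
    datastoreImpl = "mongodbDatastore"
    date.formats = ["yyyy-MM-dd'T'HH:mm:ss'Z'"]
    client.hosts = [
        [host: 'localhost', port: 9300]
    ]
    // ... remaining settings unchanged ...
}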