DOCKER: PySpark reading from PostgreSQL doesn't show data

I am trying to read data from a table in a PostgreSQL database and proceed with an ETL project. I have a Docker environment using this docker-compose:
version: "3.3"
services:
spark-master:
image: docker.io/bitnami/spark:3.3
ports:
- "9090:8080"
- "7077:7077"
volumes:
- /opt/spark-apps
- /opt/spark-data
environment:
- SPARK_LOCAL_IP=spark-master
- SPARK_WORKLOAD=master
spark-worker-a:
image: docker.io/bitnami/spark:3.3
ports:
- "9091:8080"
- "7000:7000"
depends_on:
- spark-master
environment:
- SPARK_MASTER=spark://spark-master:7077
- SPARK_WORKER_CORES=1
- SPARK_WORKER_MEMORY=1G
- SPARK_DRIVER_MEMORY=1G
- SPARK_EXECUTOR_MEMORY=1G
- SPARK_WORKLOAD=worker
- SPARK_LOCAL_IP=spark-worker-a
volumes:
- /opt/spark-apps
- /opt/spark-data
spark-worker-b:
image: docker.io/bitnami/spark:3.3
ports:
- "9092:8080"
- "7001:7000"
depends_on:
- spark-master
environment:
- SPARK_MASTER=spark://spark-master:7077
- SPARK_WORKER_CORES=1
- SPARK_WORKER_MEMORY=1G
- SPARK_DRIVER_MEMORY=1G
- SPARK_EXECUTOR_MEMORY=1G
- SPARK_WORKLOAD=worker
- SPARK_LOCAL_IP=spark-worker-b
volumes:
- /opt/spark-apps
- /opt/spark-data
postgres:
container_name: postgres_container
image: postgres:11.7-alpine
environment:
POSTGRES_USER: admin
POSTGRES_PASSWORD: admin
volumes:
- /data/postgres
ports:
- "4560:5432"
restart: unless-stopped
# jupyterlab with pyspark
jupyter-pyspark:
image: jupyter/pyspark-notebook:latest
environment:
JUPYTER_ENABLE_LAB: "yes"
ports:
- "9999:8888"
volumes:
- /app/data
I was successful in connecting to the DB, but I can't print any data. Here's my code:
from pyspark.sql import SparkSession

spark = SparkSession.builder\
    .appName("salesETL")\
    .config("spark.driver.extraClassPath", "./postgresql-42.5.1.jar")\
    .getOrCreate()

df = spark.read.format("jdbc").option("url", "jdbc:postgresql://postgres_container:5432/postgres")\
    .option("dbtable", "sales")\
    .option("driver", "org.postgresql.Driver")\
    .option("user", "admin")\
    .option("password", "admin").load()

df.show(10).toPandas()
With .toPandas() it gives me this error:
AttributeError Traceback (most recent call last)
Cell In[7], line 1
----> 1 df.show(10).toPandas()
AttributeError: 'NoneType' object has no attribute 'toPandas'
Without .toPandas(), it prints the column headers but no data:
+--------+----------+-----------+-------------+-----------------+-------------+--------------+----------+--------+-----------+
|order_id|order_date|customer_id|customer_name|customer_lastname|customer_city|customer_state|product_id|quantity|order_value|
+--------+----------+-----------+-------------+-----------------+-------------+--------------+----------+--------+-----------+
+--------+----------+-----------+-------------+-----------------+-------------+--------------+----------+--------+-----------+
I am new to PySpark/Spark, so I can't figure out what I am missing. It's my very first project. What can it be?
PS: when I run type(df), it returns pyspark.sql.dataframe.DataFrame.

show() returns nothing (None), so you cannot chain a conversion onto it; call the conversion on the DataFrame directly. Also note that the pandas-on-Spark API spells it to_pandas rather than toPandas (https://spark.apache.org/docs/3.2.0/api/python/reference/pyspark.pandas/api/pyspark.pandas.DataFrame.to_pandas.html). So the AttributeError should go away with something like:
df.to_pandas()
About the empty dataset: is there any error? If there is no error, are you sure any records exist in the table?
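For reference, show() only prints to stdout and returns None, while the conversion method lives on the DataFrame itself; a minimal sketch, assuming the df from the question (pyspark.sql.DataFrame spells it toPandas(), the to_pandas in the linked page belongs to the pandas-on-Spark API):
# show() prints rows and returns None, so nothing can be chained onto it.
df.show(10)

# Convert separately: pyspark.sql.DataFrame -> pandas.DataFrame
pdf = df.toPandas()
print(pdf.head(10))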

Well, I couldn't find out why this happened or how to fix it. Instead, I took a workaround: I loaded the data into Python using pandas and then converted the pandas DataFrame to a PySpark DataFrame.
Here's my code:
import psycopg2
import pandas as pd
from pyspark.sql import SparkSession
from sqlalchemy import create_engine

appName = "salesETL"
master = "local"
spark = SparkSession.builder.master(master).appName(appName).getOrCreate()

engine = create_engine(
    "postgresql+psycopg2://admin:admin@postgres_container/postgres?client_encoding=utf8")

pdf = pd.read_sql('select * from sales.sales', engine)

# Convert pandas dataframe to Spark DataFrame
df = spark.createDataFrame(pdf)
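Worth noting: the workaround reads from sales.sales, while the original JDBC read used dbtable = "sales", which PostgreSQL resolves against the default public schema. If the table actually lives in a sales schema, that alone would explain the empty result. A sketch of the schema-qualified JDBC read, reusing the connection details from the question (not a confirmed fix, just the one difference visible between the two code paths):
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("salesETL")
    .config("spark.driver.extraClassPath", "./postgresql-42.5.1.jar")
    .getOrCreate()
)

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://postgres_container:5432/postgres")
    .option("dbtable", "sales.sales")  # schema-qualified, matching the pandas workaround
    .option("driver", "org.postgresql.Driver")
    .option("user", "admin")
    .option("password", "admin")
    .load()
)

df.show(10)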

Related

Subkey yq query on docker-compose.yml

What I'm looking for is a yq query that returns the service names that are using
a specified volume for a given docker-compose.yml file.
For example, in the stripped down docker-compose.yml file below, say I am looking for the names of all services that
use the volume v-app-olorin.
version: "3"
services:
arwen:
this: that
volumes:
- v-app-mithrandir:/data/mithrandir
- v-app-olorin:/data/olorin
boromir:
volumes:
- v-app-mithrandir:/data/mithrandir
- v-app-stormcrow:/data/stormcrow
cirdan:
volumes:
- v-app-mithrandir:/data/mithrandir
- v-app-olorin:/data/olorin
volumes:
v-app-mithrandir:
name: v-app-mithrandir
v-app-olorin:
name: v-app-olorin
v-app-stormcrow:
name: v-app-stormcrow
The expected response would be:
arwen
cirdan
I can match simple key values with something like this:
yq e '.services | with_entries(select(.value.this == "that")) | to_entries | .[] | .key' docker-compose.yml
arwen
But I'm having trouble matching an element of the volumes array. Thank you for any help.
Here's an expression that does that:
yq '.services[] | select(.volumes[] | contains("v-app-olorin")) | key' docker-compose.yml
Explanation:
splat out the services entries into their individual nodes: .services[]
select the ones that have "v-app-olorin" in their volumes array: select(.volumes[] | contains("v-app-olorin"))
get the key of that services entry
Disclaimer: I wrote yq
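For comparison, the same selection can be scripted in Python with PyYAML (a sketch, not part of the original answers, assuming the docker-compose.yml above sits in the working directory and PyYAML is installed):
import yaml

with open("docker-compose.yml") as f:
    compose = yaml.safe_load(f)

wanted = "v-app-olorin"
for name, service in compose.get("services", {}).items():
    # volume entries look like "v-app-olorin:/data/olorin"; match on the volume name
    if any(v.split(":", 1)[0] == wanted for v in service.get("volumes", [])):
        print(name)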

Debezium with Postgres | Kafka Consumer not able to consume any message

Here is my docker-compose file:
version: '3.7'
services:
  postgres:
    image: debezium/postgres:12
    container_name: postgres
    networks:
      - broker-kafka
    environment:
      POSTGRES_PASSWORD: admin
      POSTGRES_USER: antriksh
    ports:
      - 5499:5432
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    networks:
      - broker-kafka
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    networks:
      - broker-kafka
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_CLEANER_DELETE_RETENTION_MS: 5000
      KAFKA_BROKER_ID: 1
      KAFKA_MIN_INSYNC_REPLICAS: 1
  connector:
    image: debezium/connect:latest
    container_name: kafka_connect_with_debezium
    networks:
      - broker-kafka
    ports:
      - "8083:8083"
    environment:
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: my_connect_configs
      OFFSET_STORAGE_TOPIC: my_connect_offsets
      BOOTSTRAP_SERVERS: kafka:29092
    depends_on:
      - zookeeper
      - kafka
networks:
  broker-kafka:
    driver: bridge
I am able to create a table and insert data into it. I am also able to initialise the connector using the following config:
curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '
{
  "name": "payment-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "antriksh",
    "database.password": "admin",
    "database.dbname": "payment",
    "database.server.name": "dbserver1",
    "database.whitelist": "payment",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "schema-changes.payment",
    "publication.name": "mytestpub",
    "publication.autocreate.mode": "all_tables"
  }
}'
I start my Kafka Consumer like this
kafka-console-consumer --bootstrap-server kafka:29092 --from-beginning --topic dbserver1.public.transaction --property print.key=true --property key.separator="-"
But whenever I insert or update anything in my DB, I don't see the messages being relayed to the Kafka consumer.
I have set the config property "publication.autocreate.mode": "all_tables", which should create a publication automatically for all tables. But when I run select * from pg_publication I see nothing; it's an empty table.
There is a replication slot named debezium, so I don't know which config or step I am missing that is preventing the Kafka consumer from consuming the messages.
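Before digging into the database side, it can help to confirm that the connector and its task are actually RUNNING; a sketch using the Kafka Connect REST API (assuming the requests package and Connect published on localhost:8083 as in the compose file):
import requests

base = "http://localhost:8083"

# List registered connectors, then inspect the status of the payment connector.
print(requests.get(f"{base}/connectors").json())

status = requests.get(f"{base}/connectors/payment-connector/status").json()
print(status["connector"]["state"])
for task in status["tasks"]:
    # a FAILED task carries a "trace" field with the underlying exception
    print(task["id"], task["state"], task.get("trace", "")[:200])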
Update:
I found out that, in order for Debezium to create publications automatically, pgoutput is needed as the output plugin. Also, after OneCricketer's comment, this is my connector's config:
curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '
{
  "name": "payment-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "antriksh",
    "database.password": "admin",
    "database.dbname": "payment",
    "database.server.name": "dbserver1",
    "database.whitelist": "payment",
    "database.history.kafka.bootstrap.servers": "kafka:29092",
    "database.history.kafka.topic": "schema-changes.payment",
    "plugin.name": "pgoutput",
    "publication.autocreate.mode": "all_tables",
    "publication.name": "my_publication"
  }
}'
Now I am able to see the publication being created.
16395 | my_publication | 10 | t | t | t | t | t
The issue now seems to be that the LSN is not moving ahead when I check pg_replication_slots:
select * from pg_replication_slots;
slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
-----------+----------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
debezium | pgoutput | logical | 16385 | payment | f | t | 260 | | 491 | 0/176F268 | 0/176F268
(1 row)
It's stuck at 0/176F268 ever since the payment db was created.
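For debugging, one way to watch whether the slot advances after an insert is to poll pg_replication_slots from Python; a minimal sketch with psycopg2 (assuming the antriksh/admin credentials and the 5499 host port mapping from the compose file above):
import time
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5499, dbname="payment",
    user="antriksh", password="admin",
)
conn.autocommit = True

with conn.cursor() as cur:
    for _ in range(5):
        cur.execute(
            "SELECT slot_name, active, restart_lsn, confirmed_flush_lsn "
            "FROM pg_replication_slots"
        )
        print(cur.fetchall())
        time.sleep(5)  # insert/update rows in another session in the meantime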
When I list the topics, I can see that Debezium has created the topic for the transaction table:
[appuser@a112a33992d1 ~]$ kafka-topics --zookeeper zookeeper:2181 --list
__consumer_offsets
connect-status
dbserver1.public.transaction
my_connect_configs
my_connect_offsets
I am unable to understand where it is going wrong.
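To rule out console-consumer flags as the culprit, a minimal Python consumer can be pointed at the topic from the host; a sketch assuming the kafka-python package and the PLAINTEXT_HOST listener on localhost:9092 from the compose file:
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "dbserver1.public.transaction",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",   # read the topic from the beginning
    consumer_timeout_ms=10000,      # stop iterating after 10s of silence
)

for msg in consumer:
    print(msg.key, msg.value)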

Is there a way to set up a sink and source connector for this Debezium connector?

I'm using the debezium-connector found here: https://repo1.maven.org/maven2/io/debezium/debezium-connector-oracle/1.4.0.Final/debezium-connector-oracle-1.4.0.Final-plugin.tar.gz
And I'm following these instructions for docker-compose: https://github.com/confluentinc/demo-scene/blob/master/oracle-and-kafka/docker-compose.yml
I did it for the jdbc-connector by using confluent-hub, but I don't know how to do it for Debezium. It's not solved by adding it into /usr/share/java and running.
So my docker-compose is:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.1
    hostname: zookeeper
    container_name: zookeeper
    volumes:
      - /dados/persistence/zookeeper/data:/var/lib/zookeeper/data
      - /dados/persistence/zookeeper/log:/var/lib/zookeeper/log
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-server:6.0.1
    hostname: broker
    container_name: broker
    volumes:
      - /dados/persistence/broker/data:/var/lib/kafka/data
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: kafka:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:6.0.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
  kafka-connect:
    image: cnfldemos/cp-server-connect-datagen:0.4.0-6.0.1
    hostname: connect
    container_name: kafka-connect
    volumes:
      - /dados/packages/confluent-hub/share/confluent-hub-components:/usr/share/confluent-hub-components/custom
      - /dados/persistence/kafka-connect/jars:/etc/kafka-connect/jars
    depends_on:
      - zookeeper
      - kafka
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'kafka:29092'
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components,/usr/share/confluent-hub-components/custom"
      LD_LIBRARY_PATH: '/usr/share/java/debezium-connector-oracle/instantclient_19_6/'
  control-center:
    image: confluentinc/cp-enterprise-control-center:6.0.1
    hostname: control-center
    container_name: control-center
    depends_on:
      - kafka
      - schema-registry
      - kafka-connect
      - ksqldb
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'kafka:29092'
      CONTROL_CENTER_CONNECT_CLUSTER: 'kafka-connect:8083'
      CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://10.58.0.207:8088"
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://10.58.0.207:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://10.58.0.207:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
  ksqldb:
    image: confluentinc/cp-ksqldb-server:6.0.1
    hostname: ksqldb
    container_name: ksqldb-server
    depends_on:
      - kafka
      - kafka-connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_BOOTSTRAP_SERVERS: kafka:29092
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
      KSQL_KSQL_CONNECT_URL: http://kafka-connect:8083
      KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081
  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:6.0.1
    container_name: ksqldb-cli
    depends_on:
      - kafka
      - kafka-connect
      - ksqldb
    entrypoint: /bin/sh
    tty: true
  ksql-datagen:
    image: confluentinc/ksqldb-examples:6.0.1
    hostname: ksql-datagen
    container_name: ksql-datagen
    depends_on:
      - ksqldb
      - kafka
      - schema-registry
      - kafka-connect
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
                       cub kafka-ready -b broker:29092 1 40 && \
                       echo Waiting for Confluent Schema Registry to be ready... && \
                       cub sr-ready schema-registry 8081 40 && \
                       echo Waiting a few seconds for topic creation to finish... && \
                       sleep 11 && \
                       tail -f /dev/null'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      STREAMS_BOOTSTRAP_SERVERS: kafka:29092
      STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
      STREAMS_SCHEMA_REGISTRY_PORT: 8081
  rest-proxy:
    image: confluentinc/cp-kafka-rest:6.0.1
    depends_on:
      - kafka
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'kafka:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
You need to add /etc/kafka-connect/jars to CONNECT_PLUGIN_PATH
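After extending CONNECT_PLUGIN_PATH and restarting the kafka-connect container, the Debezium Oracle connector class should appear in the plugin list; a quick check (a sketch, assuming the requests package and Connect published on localhost:8083 as in the compose file):
import requests

# Kafka Connect's REST API lists every plugin it can load from the plugin path.
plugins = requests.get("http://localhost:8083/connector-plugins").json()
for p in plugins:
    print(p["class"])

# Expect io.debezium.connector.oracle.OracleConnector to be listed once the
# jars under /etc/kafka-connect/jars are on the plugin path.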

Unable to initialize MongoDB When the Container Starts

Here is my docker-compose.yaml:
version: '3.3'
mongo:
  build:
    context: '.'
    dockerfile: 'Dockerfile'
  environment:
    MONGO_INITDB_DATABASE: 'mydb'
  ports:
    - '27017:27017'
  volumes:
    - 'data-storage:/data/db'
  networks:
    mynet:
volumes:
  data-storage:
networks:
  mynet:
Here is my Dockerfile:
FROM mongo:latest
COPY ./initdb.js /docker-entrypoint-initdb.d/
And finally, here is my initdb.js:
db.createCollection("strategyitems");
db.strategyitems.createIndex( {strategy: 1 }, { unique: false } );
db.strategyitems.createIndex( {strategy: 1, symbol: 1 }, { unique: true } );
db.strategyitems.insertMany([
  { strategy: "crypto", symbol: "btcusd", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 },
  { strategy: "crypto", symbol: "ethusd", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 },
  { strategy: "crypto", symbol: "neousd", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 }
]);
The container builds and starts successfully... but there is no way to get the db statements above executed.
If I log into the container, the folder /docker-entrypoint-initdb.d/ contains initdb.js... so I'd expect the db to get initialized.
Am I missing something?
So the supplied compose file doesn't work for me; I had to edit it to get it up & running (v18.06 CE), so heads-up on that.
version: '3.3'
services:
  mongo:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      MONGO_INITDB_DATABASE: 'mydb'
    ports:
      - '27017:27017'
    volumes:
      - 'data-storage:/data/db'
    networks:
      mynet:
volumes:
  data-storage:
networks:
  mynet:
Next, if you had run docker-compose up before adding the initdb.js file and then stopped with docker-compose down, note that docker-compose down stops the containers but doesn't remove the volume:
docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                      NAMES
c412bbd9a22b   lumberjack_mongo   docker-entrypoint.s…     7 minutes ago   Up 6 minutes   0.0.0.0:27017->27017/tcp   lumberjack_mongo_1

docker volume ls
DRIVER    VOLUME NAME
local     lumberjack_data-storage
docker-compose down
Removing lumberjack_mongo_1 ... done
Removing network lumberjack_mynet
docker volume ls
DRIVER    VOLUME NAME
local     lumberjack_data-storage
The problem arises when docker-compose up is run while the volume already exists: Docker mounts the volume before the container starts up, Mongo does some pre-checks, and if it finds that the directories are already present, it skips the initdb sequence.
If you remove the volume after docker-compose down and then run docker-compose up, the volume is created from scratch, the pre-check finds nothing, and MongoDB gets initialized:
docker volume rm lumberjack_data-storage
lumberjack_data-storage
docker-compose up
Creating network "lumberjack_mynet" with the default driver
Creating volume "lumberjack_data-storage" with default driver
Creating lumberjack_mongo_1 ... done
Attaching to lumberjack_mongo_1
[....]
mongo_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/initdb.js
mongo_1 | 2018-08-04T18:08:47.699+0000 I INDEX [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
mongo_1 | 2018-08-04T18:08:47.745+0000 I NETWORK [conn2] received client metadata from 127.0.0.1:45324 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo_1 | 2018-08-04T18:08:47.747+0000 I STORAGE [conn2] createCollection: initdb.strategyitems with generated UUID: 585edb14-bc63-4879-bc5d-504867fb5e12
mongo_1 | 2018-08-04T18:08:47.851+0000 I INDEX [conn2] build index on: initdb.strategyitems properties: { v: 2, key: { strategy: 1.0 }, name: "strategy_1", ns: "initdb.strategyitems" }
mongo_1 | 2018-08-04T18:08:47.851+0000 I INDEX [conn2] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongo_1 | 2018-08-04T18:08:47.852+0000 I INDEX [conn2] build index done. scanned 0 total records. 0 secs
mongo_1 | 2018-08-04T18:08:47.881+0000 I INDEX [conn2] build index on: initdb.strategyitems properties: { v: 2, unique: true, key: { strategy: 1.0, symbol: 1.0 }, name: "strategy_1_symbol_1", ns: "initdb.strategyitems" }
mongo_1 | 2018-08-04T18:08:47.881+0000 I INDEX [conn2] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongo_1 | 2018-08-04T18:08:47.882+0000 I INDEX [conn2] build index done. scanned 0 total records. 0 secs
mongo_1 | 2018-08-04T18:08:47.886+0000 I NETWORK [conn2] end connection 127.0.0.1:45324 (0 connections now open)
[....]
mongo_1 | MongoDB init process complete; ready for start up.
mongo_1 |
mongo_1 | 2018-08-04T18:08:48.933+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongo_1 | 2018-08-04T18:08:48.939+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=e90c80083360
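Once the volume has been recreated and the init script has run, the result can be verified from the host with pymongo; a sketch assuming pymongo is installed and port 27017 is published as in the compose file (the database name is whatever MONGO_INITDB_DATABASE was set to, 'mydb' in the question's compose; the log above happens to show an 'initdb' database):
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]  # MONGO_INITDB_DATABASE from the compose file

print(db.list_collection_names())                  # expect ['strategyitems']
print(db.strategyitems.count_documents({}))        # expect the 3 seeded documents
print(list(db.strategyitems.index_information()))  # the two created indexes plus _id_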

Grails audit logging plugin for mongodb is not working

I am using Grails 2.2.3 and mongodb 1.3.3; the CRUD operations are working fine.
I want to log my CRUD operations, so I use the audit-logging plugin "audit-logging:1.0.0". It works fine with a MySQL database but not with MongoDB. It shows:
Error 2014-05-05 15:45:04,117 [localhost-startStop-1] ERROR context.GrailsContextLoader - Error initializing the application: Cannot get property 'datastores' on null object
Message: Cannot get property 'datastores' on null object
Line | Method
->> 90 | doCall in AuditLoggingGrailsPlugin$_closure1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 303 | innerRun in java.util.concurrent.FutureTask$Sync
| 138 | run . . in java.util.concurrent.FutureTask
| 886 | runTask in java.util.concurrent.ThreadPoolExecutor$Worker
| 908 | run . . in ''
^ 662 | run in java.lang.Thread
Has anyone come across this issue? Please help me solve it.
Thanks in advance. Cheers.
I suggest you report the issue at http://jira.grails.org/browse/GPAUDITLOGGING