.plaso to elasticsearch using psort.py: authentication error

I have been trying to load a .plaso file into the ELK engine using psort.py, but I am getting an authentication error.
Error:
elasticsearch.exceptions.AuthenticationException: TransportError(401, u'')
I have also disabled X-Pack security.
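For reference, the switch I mean is the one below (a sketch, assuming the X-Pack plugin is installed on the node; it goes in elasticsearch.yml and needs a node restart to take effect):

# elasticsearch.yml (assumes the X-Pack plugin is installed)
xpack.security.enabled: false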
Other info:
2017-02-25 17:28:27,720 [INFO] (MainProcess) PID:13544 <elastic> Server address: 127.0.0.1
2017-02-25 17:28:27,727 [INFO] (MainProcess) PID:13544 <elastic> Server port: 9200
2017-02-25 17:28:27,732 [INFO] (MainProcess) PID:13544 <elastic> Index name: c8bd7007962e40ae8bc2360da5661706
2017-02-25 17:28:27,737 [INFO] (MainProcess) PID:13544 <elastic> Document type: plaso_event
2017-02-25 17:28:27,743 [INFO] (MainProcess) PID:13544 <elastic> Flush interval: 1000
2017-02-25 17:28:27,747 [INFO] (MainProcess) PID:13544 <elastic> Add non analyzed string fields: False
Traceback (most recent call last):
File "C:\Python27\Scripts\psort.py", line 880, in <module>
if not Main():
File "C:\Python27\Scripts\psort.py", line 866, in Main
tool.ProcessStorage()
File "C:\Python27\Scripts\psort.py", line 810, in ProcessStorage
Thanks for the help


Spring Boot data initialization with data.sql gets an error

I am trying to initialize the database using an SQL file at the startup of the application.
Here is my application.yml:
server:
  port: 8090
  servlet:
    context-path: /data-service
logging:
  config: classpath:logback-spring.xml
spring:
  profiles:
    active: default
  datasource:
    driverClassName: org.postgresql.Driver
    url: jdbc:postgresql://localhost:5432/my_db
    username: db_user
    password: db_pwd
  jpa:
    generate-ddl: true
    hibernate:
      ddl-auto: update
    database-platform: org.hibernate.dialect.PostgreSQL9Dialect
    show-sql: true
    defer-datasource-initialization: true
  sql:
    init:
      mode: always
      platform: postgresql
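For reference, with spring.sql.init.mode: always Boot picks the scripts up from the classpath root, so a typical layout looks like this (a sketch, assuming the default script locations, i.e. no custom spring.sql.init.data-locations):

src/main/resources/
  application.yml
  data.sql            # or data-postgresql.sql, matching spring.sql.init.platform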
and here is my insert script:
insert into region (id,country,region_code,federal_state,city,location,part_of_town,post_code) VALUES (gen_random_uuid(), 'DE', 'DE-BW', 'Baden-Württemberg', 'Freiburg', 'Breisgau-Hochschwarzwald', 'Bad Krozingen', '79189');
but I get a script error while starting the application:
Caused by: org.postgresql.util.PSQLException: ERROR: syntax error at or near "insert"
Position: 1
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:329)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:315)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:291)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:286)
at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:94)
at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
at org.springframework.jdbc.datasource.init.ScriptUtils.executeSqlScript(ScriptUtils.java:261)
... 28 common frames omitted
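To rule out the SQL statement itself, it can be run directly against the database (a sanity check, assuming psql access with the credentials from application.yml; note that gen_random_uuid() needs the pgcrypto extension on PostgreSQL versions before 13):

psql -U db_user -d my_db -c "insert into region (id,country,region_code,federal_state,city,location,part_of_town,post_code) VALUES (gen_random_uuid(), 'DE', 'DE-BW', 'Baden-Württemberg', 'Freiburg', 'Breisgau-Hochschwarzwald', 'Bad Krozingen', '79189');"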
I couldn't find a way to solve it. Thanks!

Lagom create static µ-service cluster

I wrote a few µ-services with Lagom.
I have a docker-compose file in which all µ-services are created manually.
Now I have a problem.
If I use discovery with akka-dns, nothing is found.
If I use static service location and give the services their exposed ports, they are also not found. Where is my mistake?
Docker compose file:
security:
  container_name: panakeia-security
  image: nexus.familieschmidt.online/andre-user/panakeia/security-impl:latest
  environment:
    - APPLICATION_SECRET=hlPlU12MK?[oF1Xj`>xd>CtCjTHohfu0=ekVFOo?r]lH^GpFo5o?kurLFO6sQPzD
    - POSTGRESQL_URL=jdbc:postgresql://****SERVER-IP****:12000/panakeia
    - POSTGRESQL_USERNAME=panakeia
    - POSTGRESQL_PASSWORD=123456789
    - INIT_USERNAME=test@test.de
    - INIT_USERPASS=123456
    - REQUIRED_CONTACT_POINT_NR=1
    - JAVA_OPTS=-Dconfig.resource=prod-application.conf -Dplay.server.pidfile.path=/dev/null
  expose:
    - 9000
    - 15000
    # - 8558
  ports:
    - "15000:15000"
    - "14999:9000"
  networks:
    - panakeia-network
My loader class:
class SecurityLoader extends LagomApplicationLoader {

  override def load(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with ConfigurationServiceLocatorComponents {
      // override def staticServiceUri: URI = URI.create("http://localhost:9000")
    } // AkkaDiscoveryComponents

  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[SecurityService])
}
and my production.conf:
include "application"

play {
  server {
    pidfile.path = "/dev/null"
  }
  http.secret.key = "${APPLICATION_SECRET}"
}

db.default {
  url = ${POSTGRESQL_URL}
  username = ${POSTGRESQL_USERNAME}
  password = ${POSTGRESQL_PASSWORD}
}

user.init {
  username = ${INIT_USERNAME}
  password = ${INIT_USERPASS}
}

pac4j.jwk = {"kty":"oct","k":${JWK_KEY},"alg":"HS512"}

lagom.persistence.jdbc.create-tables.auto = true

//akka {
//  discovery.method = akka-dns
//
//  cluster {
//    shutdown-after-unsuccessful-join-seed-nodes = 60s
//  }
//
//  management {
//    cluster.bootstrap {
//      contact-point-discovery {
//        discovery-method = akka.discovery
////        discovery-method = akka-dns
//        service-name = "security-service"
//        required-contact-point-nr = ${REQUIRED_CONTACT_POINT_NR}
//      }
//    }
//  }
//}

lagom.services {
  security-impl = "http://****SERVER-IP****:15000"
  // serviceB = "http://10.1.2.4:8080"
}
For other domains on the same server I have an nginx reverse proxy, but not for this project.
When I open http://SERVER-IP:14999 in the browser, I get the typical Play 404 Not Found screen. :(
I have no problem writing the compose file and linking the services manually, and also no problem using akka-dns. But I do not want Kubernetes or anything like it.
Thanks for your help.
Here is a log of one of the µ-services:
2021-04-02T14:38:24.151Z [info] akka.event.slf4j.Slf4jLogger [] - Slf4jLogger started
2021-04-02T14:38:24.409Z [info] akka.remote.artery.tcp.ArteryTcpTransport [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=ArteryTcpTransport(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.409UTC] - Remoting started with transport [Artery tcp]; listening on address [akka://application@192.168.0.5:25520] with UID [-580933006609059378]
2021-04-02T14:38:24.427Z [info] akka.cluster.Cluster [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.427UTC] - Cluster Node [akka://application@192.168.0.5:25520] - Starting up, Akka version [2.6.8] ...
2021-04-02T14:38:24.527Z [info] akka.cluster.Cluster [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.527UTC] - Cluster Node [akka://application@192.168.0.5:25520] - Registered cluster JMX MBean [akka:type=Cluster]
2021-04-02T14:38:24.528Z [info] akka.cluster.Cluster [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.527UTC] - Cluster Node [akka://application@192.168.0.5:25520] - Started up successfully
2021-04-02T14:38:24.552Z [info] akka.cluster.Cluster [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-2, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.552UTC] - Cluster Node [akka://application@192.168.0.5:25520] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
2021-04-02T14:38:24.552Z [info] akka.cluster.Cluster [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-2, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.552UTC] - Cluster Node [akka://application@192.168.0.5:25520] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
2021-04-02T14:38:25.383Z [info] akka.io.DnsExt [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=DnsExt(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.383UTC] - Creating async dns resolver async-dns with manager name SD-DNS
2021-04-02T14:38:25.386Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.386UTC] - Bootstrap using default `akka.discovery` method: AggregateServiceDiscovery
2021-04-02T14:38:25.394Z [info] akka.management.internal.HealthChecksImpl [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=HealthChecksImpl(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.394UTC] - Loading readiness checks List(NamedHealthCheck(cluster-membership,akka.management.cluster.scaladsl.ClusterMembershipCheck))
2021-04-02T14:38:25.394Z [info] akka.management.internal.HealthChecksImpl [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=HealthChecksImpl(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.394UTC] - Loading liveness checks List()
2021-04-02T14:38:25.488Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.488UTC] - Binding Akka Management (HTTP) endpoint to: 192.168.0.5:8558
2021-04-02T14:38:25.541Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.541UTC] - Including HTTP management routes for ClusterHttpManagementRouteProvider
2021-04-02T14:38:25.581Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.581UTC] - Including HTTP management routes for ClusterBootstrap
2021-04-02T14:38:25.585Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.584UTC] - Using self contact point address: http://192.168.0.5:8558
2021-04-02T14:38:25.600Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.600UTC] - Including HTTP management routes for HealthCheckRoutes
2021-04-02T14:38:26.108Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.108UTC] - Bound Akka Management (HTTP) endpoint to: 192.168.0.5:8558
2021-04-02T14:38:26.154Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.154UTC] - Initiating bootstrap procedure using akka.discovery method...
2021-04-02T14:38:26.163Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.163UTC] - Locating service members. Using discovery [akka.discovery.aggregate.AggregateServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider]
2021-04-02T14:38:26.164Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.163UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:26.189Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.189UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:26.195Z [info] play.api.db.DefaultDBApi [] - Database [default] initialized
2021-04-02T14:38:26.200Z [info] play.api.db.HikariCPConnectionPool [] - Creating Pool for datasource 'default'
2021-04-02T14:38:26.207Z [info] com.zaxxer.hikari.HikariDataSource [] - HikariPool-1 - Starting...
2021-04-02T14:38:26.218Z [info] com.zaxxer.hikari.HikariDataSource [] - HikariPool-1 - Start completed.
2021-04-02T14:38:26.221Z [info] play.api.db.HikariCPConnectionPool [] - datasource [default] bound to JNDI as DefaultDS
2021-04-02T14:38:26.469Z [info] akka.cluster.sharding.ShardRegion [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-4, akkaSource=akka://application@192.168.0.5:25520/system/sharding/ProfileProcessor, sourceActorSystem=application, akkaTimestamp=14:38:26.469UTC] - ProfileProcessor: Idle entities will be passivated after [2.000 min]
2021-04-02T14:38:26.480Z [info] akka.cluster.sharding.typed.scaladsl.ClusterSharding [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=ClusterSharding(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.480UTC] - Starting Shard Region [ProfileEntity]...
2021-04-02T14:38:26.483Z [info] akka.cluster.sharding.ShardRegion [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-3, akkaSource=akka://application@192.168.0.5:25520/system/sharding/ProfileEntity, sourceActorSystem=application, akkaTimestamp=14:38:26.483UTC] - ProfileEntity: Idle entities will be passivated after [2.000 min]
2021-04-02T14:38:26.534Z [info] play.api.Play [] - Application started (Prod) (no global state)
2021-04-02T14:38:26.553Z [info] play.core.server.AkkaHttpServer [] - Listening for HTTP on /0.0.0.0:9000
2021-04-02T14:38:27.188Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:27.187UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:27.363Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:27.363UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:27.369Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-19, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:27.369UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:28.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-19, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:28.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:28.563Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:28.563UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:28.564Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:28.564UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:29.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:29.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:29.724Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-22, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:29.723UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:29.729Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:29.729UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:30.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:30.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
It looks like the startup process is going completely wrong.
Inside the SecurityApplication class, a user should be created with this code:
val newUUID = UUID.randomUUID()
clusterSharding.entityRefFor(Profile.typedKey, newUUID.toString)
  .ask[ProfileConfirmation](reply =>
    CreateProfile(username, password, Some(EmailAddress(username)), Set(SuperAdminRole), PersonalData("", "", ""), reply)
  )(30.seconds)
  .map {
    case ProfileCmdAccepted(profile) => s"Profile ${profile.login} created"
    case ProfileCmdRejected(err)     => s"ERROR while default user init: $err"
    case a                           => s"a: $a"
  }
But a timeout exception is thrown there. I think the cluster service is not initialized. In the demo everything works.
Does someone have an idea?
Okay, found it!
In the production conf I added:
lagom.services {
  security-service = "http://"${REMOTE_IP}":15000"
  binary-service = "http://"${REMOTE_IP}":15001"
}
lagom.cluster.bootstrap.enabled = false
In the Lagom application loader, in the load method, I mixed ConfigurationServiceLocatorComponents into my application.
And now, really important: Docker must be configured! I used docker-compose.
First, add a network:
networks:
  panakeia-network:
    # driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
And now I give cluster.seed-nodes.0 the IP of the service, and I name the actor system! That is also important!
security:
  container_name: panakeia-security
  image: security-impl:latest
  environment:
    - APPLICATION_SECRET=****
    - POSTGRESQL_URL=jdbc:postgresql://postgres-database:5432/panakeia
    - POSTGRESQL_USERNAME=????
    - POSTGRESQL_PASSWORD=******
    - INIT_USERNAME=test@test.de
    - INIT_USERPASS=*******
    - JWK_KEY=******
    - REQUIRED_CONTACT_POINT_NR=1
    - KAFKA_BROKER=REALLY_IP/OR_KAFKA_SERVICE_NAME:9092
    - REMOTE_IP=????
    - JAVA_OPTS=-Dconfig.resource=prod-application.conf -Dplay.server.pidfile.path=/dev/null -Dakka.cluster.seed-nodes.0=akka://panakeia@172.28.1.5:25520 -Dplay.akka.actor-system=panakeia
  expose:
    - 9000
    - 15000
  ports:
    - "15000:15000"
    - "14999:9000"
  networks:
    panakeia-network:
      ipv4_address: 172.28.1.5
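Equivalently, the two extra JAVA_OPTS flags could live in prod-application.conf instead (a sketch under the same assumptions: the container is pinned to 172.28.1.5 and the actor system is named panakeia):

# equivalent of -Dakka.cluster.seed-nodes.0=... -Dplay.akka.actor-system=...
akka.cluster.seed-nodes = ["akka://panakeia@172.28.1.5:25520"]
play.akka.actor-system = "panakeia"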
This way it looks like it is possible to create a simple, small µ-service cluster without complex systems like Kubernetes.

Spring Batch Job does not enter STOPPED state, application never exits

I have a Spring batch job that was created from the documentation.
The application never exits. The Job is in the repository with STATUS=STOPPING, EXIT_CODE=COMPLETED.
I would expect the Job to have a status of STOPPED. Is the appropriate way to exit the application by adding a System.exit()?
I added logging in an afterJob() listener and it notes the status of the only Step as COMPLETED, with a Step Exit Status of COMPLETED as well. Interestingly, the jobExecution also has a job status of COMPLETED, but the default logging from the SimpleJobLauncher afterwards logs a status of STOPPING.
Spring Batch starter 2.1.2.RELEASE
EDIT:
[BatchJobCompletionNotificationListener] !!! JOB FINISHED!
[BatchJobCompletionNotificationListener] Job no longer running with status [STOPPING] exit status [exitCode=COMPLETED;exitDescription=].
[BatchJobCompletionNotificationListener] Step [step1] status [COMPLETED] exit status [exitCode=COMPLETED;exitDescription=].
[SimpleJobLauncher] Job: [FlowJob: [name=myjob]] completed with the following parameters: [{run.id=62, -spring.config.location=application.yaml}] and the following status: [STOPPING]
That is the end of my logging.
Not sure if it should make a difference, but I'm using a JdbcPagingItemReader with a PagingQueryProvider (based on this example).
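For context, the reader wiring looks roughly like this (a sketch with hypothetical names, not my exact code; Person stands in for my row type):

import javax.sql.DataSource;
import org.springframework.batch.item.database.JdbcPagingItemReader;
import org.springframework.batch.item.database.PagingQueryProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.BeanPropertyRowMapper;

@Configuration
public class ReaderConfig {

    // Pages through the table instead of streaming it with a cursor;
    // the PagingQueryProvider supplies the dialect-specific paging SQL.
    @Bean
    public JdbcPagingItemReader<Person> reader(DataSource dataSource, PagingQueryProvider queryProvider) {
        JdbcPagingItemReader<Person> reader = new JdbcPagingItemReader<>();
        reader.setDataSource(dataSource);
        reader.setQueryProvider(queryProvider);
        reader.setPageSize(100);
        reader.setRowMapper(new BeanPropertyRowMapper<>(Person.class));
        return reader;
    }
}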
Just tried the getting started guide with Boot 2.1.2.RELEASE and it works as expected:
# mbenhassine @ localhost in /tmp [9:57:11]
$ git clone https://github.com/spring-guides/gs-batch-processing
Cloning into 'gs-batch-processing'...
remote: Enumerating objects: 1121, done.
remote: Total 1121 (delta 0), reused 0 (delta 0), pack-reused 1121
Receiving objects: 100% (1121/1121), 448.58 KiB | 205.00 KiB/s, done.
Resolving deltas: 100% (682/682), done.
# mbenhassine @ localhost in /tmp [9:57:35]
$ cd gs-batch-processing/complete
# mbenhassine @ localhost in /tmp/gs-batch-processing/complete on git:master o [9:57:45]
$ vim pom.xml
# mbenhassine @ localhost in /tmp/gs-batch-processing/complete on git:master x [9:58:16]
$ cat pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.springframework</groupId>
<artifactId>gs-batch-processing</artifactId>
<version>0.1.0</version>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.2.RELEASE</version>
</parent>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-batch</artifactId>
</dependency>
<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>hsqldb</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
# mbenhassine @ localhost in /tmp/gs-batch-processing/complete on git:master x [9:58:22]
$ mvn clean package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building gs-batch-processing 0.1.0
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ gs-batch-processing ---
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ gs-batch-processing ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 0 resource
[INFO] Copying 2 resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ gs-batch-processing ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 5 source files to /private/tmp/gs-batch-processing/complete/target/classes
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ gs-batch-processing ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /private/tmp/gs-batch-processing/complete/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ gs-batch-processing ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-surefire-plugin:2.22.1:test (default-test) @ gs-batch-processing ---
[INFO] No tests to run.
[INFO]
[INFO] --- maven-jar-plugin:3.1.1:jar (default-jar) @ gs-batch-processing ---
[INFO] Building jar: /private/tmp/gs-batch-processing/complete/target/gs-batch-processing-0.1.0.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:2.1.2.RELEASE:repackage (repackage) @ gs-batch-processing ---
[INFO] Replacing main artifact with repackaged archive
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.558 s
[INFO] Finished at: 2019-02-27T09:58:36+01:00
[INFO] Final Memory: 25M/94M
[INFO] ------------------------------------------------------------------------
# mbenhassine @ localhost in /tmp/gs-batch-processing/complete on git:master x [9:58:36]
$ java -jar target/gs-batch-processing-0.1.0.jar
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.1.2.RELEASE)
2019-02-27 09:58:47.669 INFO 85483 --- [ main] hello.Application : Starting Application v0.1.0 on localhost with PID 85483 (/private/tmp/gs-batch-processing/complete/target/gs-batch-processing-0.1.0.jar started by mbenhassine in /private/tmp/gs-batch-processing/complete)
2019-02-27 09:58:47.672 INFO 85483 --- [ main] hello.Application : No active profile set, falling back to default profiles: default
2019-02-27 09:58:48.491 INFO 85483 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2019-02-27 09:58:48.497 WARN 85483 --- [ main] com.zaxxer.hikari.util.DriverDataSource : Registered driver with driverClassName=org.hsqldb.jdbcDriver was not found, trying direct instantiation.
2019-02-27 09:58:48.781 INFO 85483 --- [ main] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Driver does not support get/set network timeout for connections. (feature not supported)
2019-02-27 09:58:48.785 INFO 85483 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2019-02-27 09:58:49.137 INFO 85483 --- [ main] o.s.b.c.r.s.JobRepositoryFactoryBean : No database type set, using meta data indicating: HSQL
2019-02-27 09:58:49.298 INFO 85483 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : No TaskExecutor has been set, defaulting to synchronous executor.
2019-02-27 09:58:49.446 INFO 85483 --- [ main] hello.Application : Started Application in 2.117 seconds (JVM running for 2.528)
2019-02-27 09:58:49.448 INFO 85483 --- [ main] o.s.b.a.b.JobLauncherCommandLineRunner : Running default command line with: []
2019-02-27 09:58:49.532 INFO 85483 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=importUserJob]] launched with the following parameters: [{run.id=1}]
2019-02-27 09:58:49.559 INFO 85483 --- [ main] o.s.batch.core.job.SimpleStepHandler : Executing step: [step1]
2019-02-27 09:58:49.646 INFO 85483 --- [ main] hello.PersonItemProcessor : Converting (firstName: Jill, lastName: Doe) into (firstName: JILL, lastName: DOE)
2019-02-27 09:58:49.646 INFO 85483 --- [ main] hello.PersonItemProcessor : Converting (firstName: Joe, lastName: Doe) into (firstName: JOE, lastName: DOE)
2019-02-27 09:58:49.646 INFO 85483 --- [ main] hello.PersonItemProcessor : Converting (firstName: Justin, lastName: Doe) into (firstName: JUSTIN, lastName: DOE)
2019-02-27 09:58:49.646 INFO 85483 --- [ main] hello.PersonItemProcessor : Converting (firstName: Jane, lastName: Doe) into (firstName: JANE, lastName: DOE)
2019-02-27 09:58:49.646 INFO 85483 --- [ main] hello.PersonItemProcessor : Converting (firstName: John, lastName: Doe) into (firstName: JOHN, lastName: DOE)
2019-02-27 09:58:49.658 INFO 85483 --- [ main] hello.JobCompletionNotificationListener : !!! JOB FINISHED! Time to verify the results
2019-02-27 09:58:49.661 INFO 85483 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JILL, lastName: DOE> in the database.
2019-02-27 09:58:49.661 INFO 85483 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JOE, lastName: DOE> in the database.
2019-02-27 09:58:49.661 INFO 85483 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JUSTIN, lastName: DOE> in the database.
2019-02-27 09:58:49.661 INFO 85483 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JANE, lastName: DOE> in the database.
2019-02-27 09:58:49.662 INFO 85483 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JOHN, lastName: DOE> in the database.
2019-02-27 09:58:49.664 INFO 85483 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=importUserJob]] completed with the following parameters: [{run.id=1}] and the following status: [COMPLETED]
2019-02-27 09:58:49.668 INFO 85483 --- [ Thread-1] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2019-02-27 09:58:49.673 INFO 85483 --- [ Thread-1] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
# mbenhassine @ localhost in /tmp/gs-batch-processing/complete on git:master x [9:58:49]
$
Is the appropriate way to exit the application by adding a System.exit()?
No, you should gracefully shut down your application to correctly release resources and avoid leaks.
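If a JVM exit code is needed (for example when a scheduler watches the process), the usual Boot pattern is to let Spring close the context first and only pass its computed code to System.exit: a minimal sketch, assuming a standard Boot main class:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class BatchApplication {
    public static void main(String[] args) {
        // SpringApplication.exit() closes the context gracefully (shutting down
        // pools such as Hikari) and maps ExitCodeGenerator beans to an exit code.
        System.exit(SpringApplication.exit(SpringApplication.run(BatchApplication.class, args)));
    }
}

That way the shutdown is still graceful; the bare System.exit() only runs after the context has been closed.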

Karate Gatling is not generating a report when I hit the endpoint once

My Gatling Simulation class:
class <MyClass> extends Simulation {

  before {
    println("Simulation is about to start!")
  }

  val smapleTest = scenario("test").exec(karateFeature("classpath:demo/get-user.feature"))

  setUp(
    smapleTest.inject(rampUsers(1) over (10 seconds))
  ).maxDuration(1 minutes)
  // .assertions(global.responseTime.mean.lt(35))

  after {
    println("Simulation is finished!")
  }
}
My get-user.feature file:
Scenario Outline: Hit wskadmin url
  Given http://172.17.0.1:5984/whisk_local_subjects/guest
  And header Authorization = AdminAuth
  And header Content-Type = 'application/json'
  When method get
  Then status <stat>
  * print result

  Examples:
    | stat |
    | 200 |
When I run the simulation class, I get the console logs below:
Simulation com.karate.openwhisk.performance.SmokePerformanceTest started...
13:20:48.877 [GatlingSystem-akka.actor.default-dispatcher-5] INFO i.gatling.core.controller.Controller - InjectionStopped expectedCount=1
13:20:49.473 [GatlingSystem-akka.actor.default-dispatcher-4] INFO com.intuit.karate - karate.env system property was: null
13:20:49.525 [GatlingSystem-akka.actor.default-dispatcher-7] INFO com.intuit.karate - [print] I am here in get-user
13:20:49.706 [GatlingSystem-akka.actor.default-dispatcher-4] DEBUG com.intuit.karate - request:
1 > GET http://172.17.0.1:5984/whisk_local_subjects/guest
1 > Accept-Encoding: gzip,deflate
1 > Authorization: Basic d2hpc2tfYWRtaW46c29tZV9wYXNzdzByZA==
1 > Connection: Keep-Alive
1 > Content-Type: application/json
1 > Host: 172.17.0.1:5984
1 > User-Agent: Apache-HttpClient/4.5.5 (Java/1.8.0_144)
13:20:49.741 [GatlingSystem-akka.actor.default-dispatcher-4] DEBUG com.intuit.karate - response time in milliseconds: 34
1 < 200
Note: here I am getting the response in 34 milliseconds, but Gatling is unable to generate the report. Below is the error message I am getting.
Error:
Generating reports...
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at io.gatling.mojo.MainWithArgsInFile.runMain(MainWithArgsInFile.java:50)
at io.gatling.mojo.MainWithArgsInFile.main(MainWithArgsInFile.java:33)
Caused by: java.lang.UnsupportedOperationException: There were no requests sent during the simulation, reports won't be generated
at io.gatling.charts.report.ReportsGenerator.generateFor(ReportsGenerator.scala:48)
at io.gatling.app.RunResultProcessor.generateReports(RunResultProcessor.scala:76)
at io.gatling.app.RunResultProcessor.processRunResult(RunResultProcessor.scala:55)
at io.gatling.app.Gatling$.start(Gatling.scala:68)
at io.gatling.app.Gatling$.fromArgs(Gatling.scala:45)
at io.gatling.app.Gatling$.main(Gatling.scala:37)
at io.gatling.app.Gatling.main(Gatling.scala)
... 6 more
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14.199 s
[INFO] Finished at: 2018-07-24T13:20:50+05:30
[INFO] Final Memory: 30M/332M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.gatling:gatling-maven-plugin:2.2.4:test (default-cli) on project
openwhisk: Gatling failed.: Process exited with an error: 255 (Exit
value: 255) -> [Help 1]
But if I run the same simulation class with a simple change in the feature file, as below:
Scenario Outline: Hit wskadmin url
  Given http://172.17.0.1:5984/whisk_local_subjects/guest
  And header Authorization = AdminAuth
  And header Content-Type = 'application/json'
  When method get
  Then status <stat>
  * print result

  Examples:
    | stat |
    | 200 |
    | 200 |
then Gatling generates the report.
Please can someone help me find the root cause?
Thank you for your interest in karate-gatling and the very detailed report.
This is a bug, which we have fixed and just made a release for.
Can you upgrade your karate-gatling version to 0.8.0.1 and let me know how it goes?
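For a Maven build that is just a version bump of the dependency (coordinates as used by karate-gatling at the time):

<dependency>
    <groupId>com.intuit.karate</groupId>
    <artifactId>karate-gatling</artifactId>
    <version>0.8.0.1</version>
    <scope>test</scope>
</dependency>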

CouchDB service not starting on localhost on Windows 10

When I check the running services, the Apache CouchDB service shows as running, but on hitting localhost:5984/_utils it shows "Site can't be reached".
I tried replacing the nssm.exe in the bin folder of the CouchDB installation directory as specified in https://stackoverflow.com/a/44107335/219187, but with no success.
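The node can also be probed directly, bypassing Fauxton (a quick check, assuming the default port; a healthy node answers with a JSON welcome message):

curl http://localhost:5984/

In my case nothing answers there, which matches the "Site can't be reached" page.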
I am getting the following error:
[error] 2017-11-09T15:52:03.730000Z couchdb@localhost <0.201.0> --------
Supervisor couch_primary_services had child collation_driver started with
couch_drv:start_link() at undefined exit with reason "The specified module
could not be found." in context start_error
[error] 2017-11-09T15:52:03.730000Z couchdb@localhost <0.199.0> --------
Supervisor couch_sup had child couch_primary_services started with
couch_primary_sup:start_link() at undefined exit with reason {shutdown,
{failed_to_start_child,collation_driver,"The specified module could not be
found."}} in context start_error
[error] 2017-11-09T15:52:03.731000Z couchdb@localhost <0.197.0> --------
CRASH REPORT Process (<0.197.0>) with 0 neighbors exited with reason:
{{shutdown,{failed_to_start_child,couch_primary_services,{shutdown,
{failed_to_start_child,collation_driver,"The specified module could not be
found."}}}},{couch_app,start,[normal,[]]}} at
application_master:init/4(line:134) <= proc_lib:init_p_do_apply/3(line:240);
initial_call: {application_master,init,['Argument__1','Argument__2',...]},
ancestors: [<0.196.0>], messages: [{'EXIT',<0.198.0>,normal}], links:
[<0.196.0>,<0.7.0>], dictionary: [], trap_exit: true, status: running,
heap_size: 610, stack_size: 27, reductions: 139
[info] 2017-11-09T15:52:03.731000Z couchdb@localhost <0.7.0> --------
Application couch exited with reason: {{shutdown,
{failed_to_start_child,couch_primary_services,{shutdown,
{failed_to_start_child,collation_driver,"The specified module could not be
found."}}}},{couch_app,start,[normal,[]]}}
[error] 2017-11-09T15:52:07.921000Z couchdb@localhost <0.202.0> --------
CRASH REPORT Process (<0.202.0>) with 0 neighbors exited with reason: "The
specified module could not be found." at gen_server:init_it/6(line:344) <=
proc_lib:init_p_do_apply/3(line:240); initial_call: {couch_drv,init,
['Argument__1']}, ancestors: [couch_primary_services,couch_sup,<0.198.0>],
messages: [], links: [<0.201.0>], dictionary: [], trap_exit: false, status:
running, heap_size: 376, stack_size: 27, reductions: 125
[error] 2017-11-09T15:52:07.922000Z couchdb@localhost <0.198.0> --------
Error starting Apache CouchDB: