CouchDB service not starting on localhost (Windows 10)

When I check the running services, the Apache CouchDB service shows as running, but when I hit localhost:5984/_utils I get "This site can't be reached".
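For reference, a quick way to confirm that nothing is actually listening on the port, independent of the browser, is a small probe like this (a minimal sketch using only the Python standard library; the host and port are the CouchDB defaults):

from urllib.request import urlopen
from urllib.error import URLError

# Probe the endpoint the browser is failing on.
try:
    with urlopen("http://localhost:5984/", timeout=2) as resp:
        print("CouchDB answered with status", resp.status)
except URLError as exc:
    print("Nothing listening on 5984:", exc.reason)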
I tried replacing the nssm.exe in the bin folder of the CouchDB installation directory, as suggested in https://stackoverflow.com/a/44107335/219187, but with no success.
I am getting the following error:
[error] 2017-11-09T15:52:03.730000Z couchdb#localhost <0.201.0> --------
Supervisor couch_primary_services had child collation_driver started with
couch_drv:start_link() at undefined exit with reason "The specified module
could not be found." in context start_error
[error] 2017-11-09T15:52:03.730000Z couchdb#localhost <0.199.0> --------
Supervisor couch_sup had child couch_primary_services started with
couch_primary_sup:start_link() at undefined exit with reason {shutdown,
{failed_to_start_child,collation_driver,"The specified module could not be
found."}} in context start_error
[error] 2017-11-09T15:52:03.731000Z couchdb#localhost <0.197.0> --------
CRASH REPORT Process (<0.197.0>) with 0 neighbors exited with reason:
{{shutdown,{failed_to_start_child,couch_primary_services,{shutdown,
{failed_to_start_child,collation_driver,"The specified module could not be
found."}}}},{couch_app,start,[normal,[]]}} at
application_master:init/4(line:134) <= proc_lib:init_p_do_apply/3(line:240);
initial_call: {application_master,init,['Argument__1','Argument__2',...]},
ancestors: [<0.196.0>], messages: [{'EXIT',<0.198.0>,normal}], links:
[<0.196.0>,<0.7.0>], dictionary: [], trap_exit: true, status: running,
heap_size: 610, stack_size: 27, reductions: 139
[info] 2017-11-09T15:52:03.731000Z couchdb#localhost <0.7.0> --------
Application couch exited with reason: {{shutdown,
{failed_to_start_child,couch_primary_services,{shutdown,
{failed_to_start_child,collation_driver,"The specified module could not be
found."}}}},{couch_app,start,[normal,[]]}}
[error] 2017-11-09T15:52:07.921000Z couchdb#localhost <0.202.0> --------
CRASH REPORT Process (<0.202.0>) with 0 neighbors exited with reason: "The
specified module could not be found." at gen_server:init_it/6(line:344) <=
proc_lib:init_p_do_apply/3(line:240); initial_call: {couch_drv,init,
['Argument__1']}, ancestors: [couch_primary_services,couch_sup,<0.198.0>],
messages: [], links: [<0.201.0>], dictionary: [], trap_exit: false, status:
running, heap_size: 376, stack_size: 27, reductions: 125
[error] 2017-11-09T15:52:07.922000Z couchdb#localhost <0.198.0> --------
Error starting Apache CouchDB:

Related

Can't install and use JHipster Registry 7.4.0 because hazelcast and logs properties are left unbound

I'm trying to use JHipster Registry 7.4.0 for my new application. For now, I just want to start it with the default configuration.
In my application and application-dev properties files, I have (in both files):
...
jhipster:
  async:
    core-pool-size: 2
    max-pool-size: 50
    queue-capacity: 10000
  cache:
    hazelcast:
      time-to-live-seconds: 3600
      backup-count: 1
      management-center:
        enabled: false
        update-interval: 3
        url: http://localhost:8180/mancenter
  security:
    authentication:
      jwt:
        base64-secret: <MY KEY>
  metrics:
    logs:
      enabled: false
      report-frequency: 60
...
When I start my registry, it fails because the logs and hazelcast properties are left unbound:
2022-11-17T11:42:40.901+01:00 INFO 11900 --- [ main] t.jhipster.registry.JHipsterRegistryApp : The following 4 profiles are active: "composite", "dev", "api-docs", "swagger"
2022-11-17T11:42:44.401+01:00 WARN 11900 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'undertowServletWebServerFactory' defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/ServletWebServerFactoryConfiguration$EmbeddedUndertow.class]: Initialization of bean failed; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'webConfigurer' defined in URL [jar:file:/C:/Users/myuser/Desktop/JHIPSTER_REGISTRY-7.4.0/jhipster-registry-7.4.0.jar!/BOOT-INF/classes!/tech/jhipster/registry/config/WebConfigurer.class]: Unsatisfied dependency expressed through constructor parameter 1; nested exception is org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'jhipster-tech.jhipster.config.JHipsterProperties': Could not bind properties to 'JHipsterProperties' : prefix=jhipster, ignoreInvalidFields=false, ignoreUnknownFields=false; nested exception is org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'jhipster' to tech.jhipster.config.JHipsterProperties
2022-11-17T11:42:44.489+01:00 ERROR 11900 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :

***************************
APPLICATION FAILED TO START
***************************

Description:

Binding to target [Bindable#9a2ec9b type = tech.jhipster.config.JHipsterProperties, value = 'provided', annotations = array<Annotation>[#org.springframework.boot.context.properties.ConfigurationProperties(ignoreInvalidFields=false, ignoreUnknownFields=false, prefix="jhipster", value="jhipster")]] failed:

    Property: jhipster.metrics.logs.enabled
    Value: "false"
    Origin: URL [file:config/application-dev.yml] - 88:22
    Reason: The elements [jhipster.metrics.logs.enabled,jhipster.metrics.logs.report-frequency] were left unbound.

    Property: jhipster.metrics.logs.report-frequency
    Value: "60"
    Origin: URL [file:config/application-dev.yml] - 89:31
    Reason: The elements [jhipster.metrics.logs.enabled,jhipster.metrics.logs.report-frequency] were left unbound.

Action:

Update your application's configuration
Do you know why these properties are not found? I have checked the sample on the GitHub registry repo and the configuration seems to be identical...
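For what it's worth, one quick sanity check is to confirm the YAML really parses into the structure shown above (a hedged sketch using PyYAML, which is an assumption and not part of JHipster; the file path comes from the error output):

import yaml  # PyYAML: pip install pyyaml

with open("config/application-dev.yml") as f:
    cfg = yaml.safe_load(f)

# If the indentation were off, this would print None instead of the mapping
# containing 'enabled' and 'report-frequency'.
print(cfg.get("jhipster", {}).get("metrics", {}).get("logs"))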

NullPointerException: ha.BootstrapStandby.run(BootstrapStandby.java:549)

$HADOOP_HOME/bin/hdfs --config $HADOOP_CONF_DIR namenode -bootstrapStandby
I am getting the error below in Hadoop HA 3.3.3 on the standby node for the above command.
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853529569,"date":"2022-08-07 06:25:29,569","level":"INFO","thread":"main","message":"registered UNIX signal handlers for [TERM, HUP, INT]"}
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853529682,"date":"2022-08-07 06:25:29,682","level":"INFO","thread":"main","message":"createNameNode [-bootstrapStandby]"}
{"name":"org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby","time":1659853530068,"date":"2022-08-07 06:25:30,068","level":"INFO","thread":"main","message":"Found nn: apache-hadoop-namenode-0.apache-hadoop-namenode.backend.svc.cluster.local, ipc: hdfs:8020"}
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853530069,"date":"2022-08-07 06:25:30,069","level":"ERROR","thread":"main","message":"Failed to start namenode.","exceptionclass":"java.io.IOException","stack":["java.io.IOException: java.lang.NullPointerException","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:549)","\tat org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1741)","\tat org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1834)","Caused by: java.lang.NullPointerException","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.parseConfAndFindOtherNN(BootstrapStandby.java:435)","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:114)","\tat org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)","\tat org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:95)","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:544)","\t... 2 more"]}{"name":"org.apache.hadoop.util.ExitUtil","time":1659853530074,"date":"2022-08-07 06:25:30,074","level":"INFO","thread":"main","message":"Exiting with status 1: java.io.IOException: java.lang.NullPointerException"}
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853530083,"date":"2022-08-07 06:25:30,083","level":"INFO","thread":"shutdown-hook-0","message":"SHUTDOWN_MSG: \n\nSHUTDOWN_MSG: Shutting down NameNode at apache-hadoop-namenode-1.apache-hadoop-namenode.backend.svc.cluster.local"}

pyflink with kafka java.lang.RuntimeException: Failed to create stage bundle factory

My environment:
・Python 3.8
・JDK 11
I've started learning PyFlink and wrote some code following the official guide at https://nightlies.apache.org/flink/flink-docs-master/docs/dev/python/datastream/intro_to_datastream_api/
Here is my code:
from pyflink.common.serialization import JsonRowDeserializationSchema, JsonRowSerializationSchema
from pyflink.common import WatermarkStrategy, Row
from pyflink.common.serialization import Encoder
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors import FlinkKafkaConsumer, FlinkKafkaProducer


def streaming():
    env = StreamExecutionEnvironment.get_execution_environment()

    deserialization_schema = JsonRowDeserializationSchema.builder().type_info(
        type_info=Types.ROW([Types.INT(), Types.STRING()])).build()

    kafka_consumer = FlinkKafkaConsumer(
        topics='test',
        deserialization_schema=deserialization_schema,
        properties={'bootstrap.servers': 'localhost:9092', 'group.id': 'test_group'})

    ds = env.add_source(kafka_consumer)
    ds = ds.map(lambda a: Row(a % 4, 1),
                output_type=Types.ROW([Types.LONG(), Types.LONG()])) \
        .key_by(lambda a: a[0]) \
        .reduce(lambda a, b: Row(a[0], a[1] + b[1]))

    serialization_schema = JsonRowSerializationSchema.builder().with_type_info(
        type_info=Types.ROW([Types.LONG(), Types.LONG()])).build()

    kafka_sink = FlinkKafkaProducer(
        topic='test_sink_topic',
        serialization_schema=serialization_schema,
        producer_config={'bootstrap.servers': 'localhost:9092',
                         'group.id': 'test_group'})

    ds.add_sink(kafka_sink)
    env.execute('datastream_api_demo')


if __name__ == '__main__':
    streaming()
Firstly it told me to specify a JAR file, so I downloaded the flink-connector-kafka and kafka-clients JAR files from https://mvnrepository.com/artifact/org.apache.flink and put them into the pyflink/lib directory.
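(As an aside, instead of copying JARs into pyflink/lib, the dependency can also be declared in the script itself via add_jars; a sketch, where the path and version are placeholders for whatever connector build matches your Flink version:)

from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
# Placeholder path: point this at the connector JAR you downloaded.
env.add_jars("file:///C:/work/pyflink_demo/flink-sql-connector-kafka_2.11-1.14.4.jar")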
Now, at the next step, I'm getting this error:
(pyflink_demo) C:\work\pyflink_demo>python Kafka_stream_Kafka.py
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.flink.api.java.ClosureCleaner (file:/C:/work/pyflink_demo/Lib/site-packages/pyflink/lib/flink-dist_2.11-1.14.4.jar) to field java.util.Properties.serialVersionUID
WARNING: Please consider reporting this to the maintainers of org.apache.flink.api.java.ClosureCleaner
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Traceback (most recent call last):
File "Kafka_stream_Kafka.py", line 38, in <module>
streaming()
File "Kafka_stream_Kafka.py", line 33, in streaming
env.execute('datastream_api_demo')
File "C:\work\pyflink_demo\lib\site-packages\pyflink\datastream\stream_execution_environment.py", line 691, in execute
return JobExecutionResult(self._j_stream_execution_environment.execute(j_stream_graph))
File "C:\work\pyflink_demo\lib\site-packages\py4j\java_gateway.py", line 1285, in __call__
return_value = get_return_value(
File "C:\work\pyflink_demo\lib\site-packages\pyflink\util\exceptions.py", line 146, in deco
return f(*a, **kw)
File "C:\work\pyflink_demo\lib\site-packages\py4j\protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o0.execute.
: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
at akka.dispatch.OnComplete.internal(Future.scala:300)
at akka.dispatch.OnComplete.internal(Future.scala:297)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:24)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23)
at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:252)
at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:242)
at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:233)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:684)
at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79)
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:444)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:316)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:314)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at akka.actor.Actor.aroundReceive(Actor.scala:537)
at akka.actor.Actor.aroundReceive$(Actor.scala:535)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
at akka.actor.ActorCell.invoke(ActorCell.scala:548)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
at akka.dispatch.Mailbox.run(Mailbox.scala:231)
at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
... 5 more
Caused by: java.lang.RuntimeException: Failed to create stage bundle factory! INFO:root:Initializing Python harness: C:\work\pyflink_demo\lib\site-packages\pyflink\fn_execution\beam\beam_boot.py --id=4-1 --provision_endpoint=localhost:51794
INFO:root:Starting up Python harness in loopback mode.
at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:566)
at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.open(BeamPythonFunctionRunner.java:255)
at org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:131)
at org.apache.flink.streaming.api.operators.python.AbstractOneInputPythonFunctionOperator.open(AbstractOneInputPythonFunctionOperator.java:116)
at org.apache.flink.streaming.api.operators.python.PythonProcessOperator.open(PythonProcessOperator.java:59)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:110)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:711)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.call(StreamTaskActionExecutor.java:100)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:687)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:654)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: Process died with exit code 0
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:451)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:436)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:303)
at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:564)
... 14 more
Caused by: java.lang.IllegalStateException: Process died with exit code 0
at org.apache.beam.runners.fnexecution.environment.ProcessManager$RunningProcess.isAliveOrThrow(ProcessManager.java:75)
at org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:112)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:252)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:231)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
... 22 more
I tried to figure out what's going on and found a very similar question: What's wrong with my Pyflink setup that Python UDFs throw py4j exceptions?
It says the cause there was a network proxy problem: the JVM and Python communicate over a local socket, so local communication must bypass the proxy.
I set the environment variable "no_proxy", but it doesn't work.
Could anyone provide a solution for this?
There is no useful information in the exception stack to help identify the problem. This is likely caused by a known issue (FLINK-26543, already fixed, but not yet released). The issue only occurs in loopback mode, which is enabled by default when executing the job locally.
For now, you could try to force the job to run in process mode instead of loopback mode by setting the environment variable _python_worker_execution_mode to process. After doing this, you should see the root cause of the failure.
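For example, in the script itself (a minimal sketch; it just sets the variable named above before the job is built and executed):

import os

# Workaround for FLINK-26543: force process mode instead of loopback mode.
os.environ['_python_worker_execution_mode'] = 'process'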
Besides, there is also a small issue in your code. I guess you meant ds.map(lambda a: Row(a[0] % 4, 1), output_type=Types.ROW([Types.LONG(), Types.LONG()])) instead of ds.map(lambda a: Row(a % 4, 1), output_type=Types.ROW([Types.LONG(), Types.LONG()])), since the % operation is not supported on a Row object.
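Applied to the script above, the corrected map step would read:

ds = ds.map(lambda a: Row(a[0] % 4, 1),  # a[0] is the INT field of the row
            output_type=Types.ROW([Types.LONG(), Types.LONG()])) \
    .key_by(lambda a: a[0]) \
    .reduce(lambda a, b: Row(a[0], a[1] + b[1]))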
I have tried the script, and I am not quite sure what caused the error. Try starting Kafka first and creating the topics before running the script, or start Kafka and run the script a second time after the first failure.

Elasticsearch cluster: java.lang.IllegalStateException: Future got interrupted

After a restart, our Elasticsearch cluster is showing this in the logs:
[2018-08-30T14:13:15,111][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [myelastic01] collector [cluster_stats] failed to collect data
java.lang.IllegalStateException: Future got interrupted
at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:53) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:34) ~[elasticsearch-6.3.2.jar:6.3.2]
(...)
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) ~[?:1.8.0_181]
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) ~[?:1.8.0_181]
(...)
[2018-08-30T14:15:22,972][ERROR][o.e.x.m.c.i.IndexStatsCollector] [myelastic01] collector [index-stats] failed to collect data
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:166) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:68) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:45) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.<init>(TransportBroadcastByNodeAction.java:255) ~[elasticsearch-6.3.2.jar:6.3.2]
[2018-08-30T14:18:50,307][WARN ][r.suppressed ] path: /.kibana/_search, params: {ignore_unavailable=true, index=.kibana, filter_path=aggregations.types.buckets}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:288) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:128) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:249) ~[elasticsearch-6.3.2.jar:6.3.2]
(...)
[2018-08-30T14:18:50,304][WARN ][r.suppressed ] path: /.kibana/doc/config%3A6.3.2, params: {index=.kibana, id=config:6.3.2, type=doc}
org.elasticsearch.action.NoShardAvailableActionException: No shard available for [get [.kibana][doc][config:6.3.2]: routing [null]]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:207) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.start(TransportSingleShardAction.java:186) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:95) ~[elasticsearch-6.3.2.jar:6.3.2]
(...)
This is an ELK stack that was working before, and the configuration has not been modified. What could be the cause? I've searched around but found nothing relevant.
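As a first diagnostic step (a hypothetical check, not from the original post; host and port are the Elasticsearch defaults), asking the cluster for its health shows whether the state has actually recovered:

import json
from urllib.request import urlopen

# "state not recovered / initialized" in the blocks above means the cluster
# has not finished recovery; _cluster/health reports that as status "red".
with urlopen("http://localhost:9200/_cluster/health") as resp:
    health = json.load(resp)
print(health["status"], "-", health.get("unassigned_shards"), "unassigned shards")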

kubernetes-scdf error: [io-9393-exec-10] o.s.c.d.s.k.AbstractKubernetesDeployer : An error has occurred

I created a Spring Cloud Data Flow pod on my own Kubernetes cluster, and this error occurs when I deploy a stream on SCDF.
The logs are below:
2017-02-01 15:04:56.121 ERROR 1 --- [nio-9393-exec-5] o.s.c.d.s.k.AbstractKubernetesDeployer : An error has occurred.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57) ~[kubernetes-client-1.4.14.jar!/:na]
......
2017-02-04T08:01:19.721241000Z Caused by: java.net.ConnectException: Failed to connect to kubernetes.default.svc/10.254.0.1:443
2017-02-04T08:01:19.721387000Z at okhttp3.internal.io.RealConnection.connectSocket(RealConnection.java:187) ~[okhttp-3.3.1.jar!/:na]
2017-02-04T08:01:19.721537000Z at okhttp3.internal.io.RealConnection.buildConnection(RealConnection.java:170) ~[okhttp-3.3.1.jar!/:na]
2017-02-04T08:01:19.721684000Z at okhttp3.internal.io.RealConnection.connect(RealConnection.java:111) ~[okhttp-3.3.1.jar!/:na]
2017-02-04T08:01:19.721883000Z at okhttp3.internal.http.StreamAllocation.findConnection(StreamAllocation.java:187) ~[okhttp-3.3.1.jar!/:na]
2017-02-04T08:01:19.722037000Z at okhttp3.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:123) ~[okhttp-3.3.1.jar!/:na]
... 77 common frames omitted
2017-02-01 15:04:56.121 WARN 1 --- [nio-9393-exec-5] o.s.c.d.s.c.StreamDeploymentController : Exception when deploying the app StreamAppDefinition [streamName=ticktock, name=time, registeredAppName=time, properties={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}]: An error has occurred.
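The root cause in the stack is the failed connection to kubernetes.default.svc/10.254.0.1:443, so a reasonable first step is to check whether the SCDF pod can reach the API server at all (a hypothetical probe to run from inside the pod; the host and port are copied from the log above):

import socket

# Host and port taken from the ConnectException in the log above.
try:
    sock = socket.create_connection(("kubernetes.default.svc", 443), timeout=5)
    print("Reached the Kubernetes API server:", sock.getpeername())
    sock.close()
except OSError as exc:
    print("Cannot reach the API server:", exc)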