Fuse 6.3 container-list command throws NullPointerException - JBoss Fuse

In a fabric ensemble setup, one Fuse server has joined successfully using the fabric:join command.
After that, if we run the container-list command it throws the error below, and the CLI displays "Error executing command: java.lang.NullPointerException".
Log file:
io.fabric8.api.FabricException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /fabric/authentication/crypt/algorithm
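One way to confirm what the exception is saying, assuming the fabric8 zk:* console commands are available (they ship with Fabric in Fuse 6.x), is to list the parent znode from the Karaf shell of the root container:

JBossFuse:karaf@root> zk:list /fabric/authentication/crypt

If algorithm is genuinely missing there, the registry the container joined was only partially initialized; re-running fabric:join once the ensemble is fully provisioned is a plausible next step, though that diagnosis is an assumption based on the NoNode path alone.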

Related

VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory

I am trying to install kubeadm, and for this I am trying to create a Vagrant environment. I cloned https://github.com/kodekloudhub/certified-kubernetes-administrator-course to my server and then ran the command "vagrant up", and I got the error below. I am using Ubuntu 20.04.5 LTS.
==> kubemaster: Clearing any previously set network interfaces...
There was an error while executing VBoxManage, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["hostonlyif", "create"]
Stderr: 0%...
Progress state: NS_ERROR_FAILURE
VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
VBoxManage: error: Context: "RTEXITCODE handleCreate(HandlerArg*)" at line 95 of file VBoxManageHostonly.cpp
I want to create a Vagrant environment.
This problem was solved by downgrading the VirtualBox version. Try installing version 6.38.
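Before (or instead of) downgrading, it can be worth checking whether the VirtualBox kernel modules are loaded at all, since a missing /dev/vboxnetctl usually means they are not. A minimal sketch for an Ubuntu host; the module names are standard, but whether modprobe suffices depends on how VirtualBox was installed:

# check whether the VirtualBox kernel modules are loaded
lsmod | grep -i vbox
# load them if absent
sudo modprobe vboxdrv
sudo modprobe vboxnetadp
sudo modprobe vboxnetflt

If the modules fail to load, Oracle's packages provide sudo /sbin/vboxconfig to rebuild them against the current kernel.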

Apache Beam Spark Portable Runner

I am running a sample pipeline, and my environment is this:
python "SaiStudy - Apache-Beam-Spark.py" --runner=PortableRunner --job_endpoint=192.168.99.102:8099
My Spark is running on a Docker Container and I can see that the JobService is running at 8099.
I am getting the following error:
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{"created":"@1603539936.536000000","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":4090,"referenced_errors":[{"created":"@1603539936.536000000","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":394,"grpc_status":14}]}"
When I curl to ip:port, I can see the following error in the Docker logs:
Oct 24, 2020 11:34:50 AM org.apache.beam.vendor.grpc.v1p26p0.io.grpc.netty.NettyServerTransport notifyTerminated
INFO: Transport failed
org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.http2.Http2Exception: Unexpected HTTP/1.x request: GET /
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:103)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.readClientPrefaceString(Http2ConnectionHandler.java:302)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:239)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:438)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:505)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:283)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.apache.beam.vendor.grpc.v1p26p0.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Help Please.
Please find instructions on how to set up the PortableRunner for Spark here:
https://beam.apache.org/documentation/runners/spark/
Basically you need to set up an additional Docker container (as described) which acts as a job server between Beam (in any language) and Spark. You connect Beam to the job server, and the job server to Spark. The Http2Exception you see when you curl the port is expected, incidentally: the job endpoint speaks gRPC over HTTP/2, so a plain HTTP/1.x GET will always be rejected with that message.
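As a minimal sketch of that setup (the Spark master URL and endpoint addresses are assumptions; the image name and flags are from the Beam Spark runner page linked above):

# start the Beam Spark job server, pointing it at your Spark master
docker run --net=host apache/beam_spark_job_server:latest --spark-master-url=spark://localhost:7077

# then submit the pipeline against the job server's gRPC endpoint (default port 8099)
python "SaiStudy - Apache-Beam-Spark.py" --runner=PortableRunner --job_endpoint=localhost:8099 --environment_type=LOOPBACK

LOOPBACK asks the job server to run the SDK worker from the submitting process, which avoids needing a separate SDK harness container while testing.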

ERROR Exiting Kafka due to fatal exception (kafka.Kafka$) on Windows - Apache Kafka

I'm getting the below error while starting the Kafka server on a Windows machine. I downloaded Scala 2.11 - kafka_2.11-2.1.0.tgz from https://kafka.apache.org/downloads and did the following steps:
Go to the config folder in Apache Kafka (C:\Apache-Kafka\kafka_2.11-2.1.0\config) and edit "server.properties" using any text editor.
Find log.dirs and replace the value after "=" (/tmp/kafka-logs) with C:\Apache-Kafka\kafka_2.11-2.1.0\kafka-logs.
Now simply start the server:
>kafka-server-start.bat C:\Apache-Kafka\kafka_2.11-2.1.0\config
Error:
C:\Apache-Kafka\kafka_2.11-2.1.0\bin\windows>kafka-server-start.bat C:\Apache-Kafka\kafka_2.11-2.1.0\config
[2018-12-14 21:09:34,566] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2018-12-14 21:09:34,583] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.nio.file.AccessDeniedException: C:\Apache-Kafka\kafka_2.11-2.1.0\config
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:230)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:560)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:42)
at kafka.Kafka$.main(Kafka.scala:58)
at kafka.Kafka.main(Kafka.scala)
C:\Apache-Kafka\kafka_2.11-2.1.0\bin\windows>
Note: I've already set up Apache ZooKeeper on my Windows machine and it's running on port 2181.
I ran the command using Run as administrator.
After kafka-server-start.bat, pass the server.properties file rather than the config folder: use ..\..\config\server.properties, i.e. with a backslash between the two pairs of dots. In my case that was the fix.
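Put together, a run from the bin\windows folder would look like this (the install path is the one from the question):

C:\Apache-Kafka\kafka_2.11-2.1.0\bin\windows>kafka-server-start.bat ..\..\config\server.properties

The AccessDeniedException in the question is Kafka trying to open the config directory itself as a properties file, which Windows reports as access denied.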
In general, avoid using the C: drive to store kafka-logs; try a drive other than C: for the Kafka logs. Change the property log.dirs={drive other than C:}/tmp/kafka-logs in KafkaHome/config/server.properties.
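For example, the relevant line in server.properties would then read as follows (D: is an assumption; any writable non-C: drive works):

# KafkaHome/config/server.properties
log.dirs=D:/tmp/kafka-logs

Forward slashes also sidestep backslash escaping in Java properties files.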

Cannot create service using Bluemix CLI

I tried creating a cloudantNoSQLDB service using the below command:
$ bx service create cloudantNoSQLDB Lite myCloudantDB
and it returned the below output with an error:
Invoking 'cf create-service cloudantNoSQLDB Lite myCloudantDB'...
Creating service instance myCloudantDB in org bluemixfun.net / space WH as pravvyas@in.ibm.com...
FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: {"description"=>"The instance cannot be provisioned in the environment requested."}
pravvyas@pravvyas MINGW64 ~/nodejs-cloudant (master)
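A 502 "cannot be provisioned in the environment requested" from the service broker usually means the targeted region or org does not offer that plan. A quick check, sketched with the cf CLI that bx delegates to (the service label is the one from the question):

bx target
cf marketplace -s cloudantNoSQLDB

The first command shows which region/org/space you are actually targeting; the second lists the plans the marketplace offers there.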

Running Kafka kafka_2.11-0.9.0.0 on Windows

I am trying to run the latest version of Kafka (kafka_2.11-0.9.0.0) on Windows. I changed the log and tmp directories in all the config files, and when I try to start the ZooKeeper server I get the below error:
[2015-12-20 18:43:10,826] ERROR Unexpected exception, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
java.io.IOException: Unable to create data directory C:kafka-runtime	mp\version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:85)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:104)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
C:\kafka-runtime is the directory that I created; in the error above I can see that the "\" is being removed.
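That is java.util.Properties escaping the backslashes when the config is loaded: \k is silently dropped and \t becomes a tab, which is exactly why the path in the error reads C:kafka-runtime followed by a gap and mp. A minimal fix in the ZooKeeper config, assuming dataDir points at C:\kafka-runtime\tmp, is to use forward slashes or doubled backslashes:

# config/zookeeper.properties
dataDir=C:/kafka-runtime/tmp
# or equivalently
dataDir=C:\\kafka-runtime\\tmp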