neo4j local cluster start failed - deployment

I am trying to set up a local cluster in Neo4j by following this tutorial (https://neo4j.com/docs/operations-manual/current/tutorial/local-causal-cluster/#tutorial-local-cluster-configure-cores). I have downloaded the latest Enterprise version, 4.1.1, and have followed all the steps mentioned in the tutorial. However, I am confused by the step that says to move to the bin directory and run the command "neo4j start". When I run this command it says the neo4j service is not found, so I searched around and found that I have to install Neo4j as a service first; only then will the start command start it. When I install it, the service runs for the core-01 instance. Following the tutorial, I am supposed to start core-02 and core-03 as well, but again the start command does not work unless I install each of them too.
Given this, am I supposed to install 3 different services, or is it supposed to be a single service for the whole cluster of 3 instances? If a single one, then the neo4j service will always point to the core-01 instance.
Skipping the start command, if I run neo4j console in the bin directory of each of the 3 instances, I get this error in all three consoles.
How am I supposed to handle this? Am I not setting up the Neo4j service properly, or is there some issue in the configuration?
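One approach, assuming you are on Windows (where neo4j start only works against an installed service), is to give each instance its own Windows service by setting dbms.windows_service_name to a unique value in each instance's conf\neo4j.conf, then install and start each one from its own bin directory. This is only a sketch; the service names below are made up for illustration:
in core-01\conf\neo4j.conf: dbms.windows_service_name=neo4j-core-01
in core-02\conf\neo4j.conf: dbms.windows_service_name=neo4j-core-02
in core-03\conf\neo4j.conf: dbms.windows_service_name=neo4j-core-03
cd core-01\bin
neo4j install-service
neo4j start
(repeat the last three commands for core-02 and core-03)
Alternatively, skip the services entirely and run neo4j console in a separate terminal for each instance, which keeps each core in the foreground.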

Related

Unable to start confluent local services

I'm using confluent-7.3.0.zip from http://packages.confluent.io/archive/7.3/ and running:
confluent local services start
I'm getting the error below even though the file is there. Please help!
Using CONFLUENT_CURRENT: C:\Users\user\AppData\Local\Temp\confluent.727246
Starting ZooKeeper
Error: exec: "D:\\Development\\tools\\confluent\\bin\\zookeeper-server-start": file does not exist
The bin\zookeeper-server-start.sh script is intended for execution on Unix; Confluent Platform does not support Windows.
If you use WSL2, it may work there. See also: How to install Kafka on Windows?
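As a rough sketch of the WSL2 route, assuming an Ubuntu distribution and that you extract the archive to a Linux path (the paths below are placeholders, not from the original post):
unzip confluent-7.3.0.zip -d ~/confluent
export CONFLUENT_HOME=~/confluent/confluent-7.3.0
export PATH=$CONFLUENT_HOME/bin:$PATH
confluent local services start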

Lens (K8s client) gives error: exec: executable oci not found

I spent several hours looking at this Lens error for K8s. I installed Python and the OCI CLI for Windows 10 (I downloaded the oci-cli offline installation and ran python install.py) and configured cluster access. Using CMD everything works OK:
the kubectl command works fine, even the get pods command works.
But when connecting with Lens it gives me this error:
Error getting Credentials: exec: executable oci not found
What am I missing?
I finally found the solution: download kubectl.exe from
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/windows/amd64/kubectl.exe
Put it in a folder on the disk, for example c:\kubernetes,
and add that folder to the PATH environment variable.
Then restart the PC. Without a reboot it didn't work.
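A quick sketch of those steps from a Windows command prompt, assuming c:\kubernetes is the folder you chose (setx only affects newly opened terminals, hence the restart mentioned above):
mkdir c:\kubernetes
curl -L -o c:\kubernetes\kubectl.exe https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/windows/amd64/kubectl.exe
setx PATH "%PATH%;c:\kubernetes"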

How to set up Apache Atlas using embedded Cassandra and Apache Solr

Step 1: Clone the repository.
git clone https://github.com/apache/atlas
Step 2: Generate the tar file by executing the command below.
mvn clean -DskipTests package -Pdist,embedded-cassandra-solr
Step 3: Once the build is successful, extract the ‘apache-atlas-3.0.0-SNAPSHOT-server.tar’ file and execute the command below.
.\bin\atlas_start.py
I see the messages below in the console.
Starting Atlas server on host: localhost
Starting Atlas server on port: 21000
......................
Apache Atlas Server started!!!
But when I hit the URL 'http://localhost:21000/', I get a Service Unavailable message.
HTTP ERROR 503 Service Unavailable
URI: /
STATUS: 503
MESSAGE: Service Unavailable
SERVLET: -
The log files are empty, so I'm not sure how to identify the issue.
A couple of questions:
a. Do I need to explicitly set up Cassandra and Apache Solr for embedded mode too? If so, please point me to the relevant documentation.
b. Even though I generated the build with the embedded Cassandra profile, the application still looks for the HADOOP_HOME property when launching. What is the reason for this?
I had the same problem and, after a while, found that Zookeeper was not starting at all; so I stopped the Zookeeper service and restarted the Atlas installation. (Here is the installation guide I followed: https://manjitsingh664.medium.com/apache-atlas-installation-guide-9098df98d5c3.)
For your case, replace:
mvn clean -DskipTests package -Pdist,embedded-hbase-solr
with:
mvn clean -DskipTests package -Pdist,embedded-cassandra-solr
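As a rough way to confirm whether the embedded Zookeeper and the Atlas web service actually came up (port 2181 and the admin/admin credentials are defaults and may differ in your setup):
echo ruok | nc localhost 2181
curl -u admin:admin http://localhost:21000/api/atlas/admin/version
A healthy Zookeeper answers imok (newer Zookeeper releases may require ruok to be whitelisted), and the version endpoint should return JSON once Atlas has finished starting, instead of the 503.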

Not able to access Fuse Admin console

I have started the server using fuse_home/bin/start. The server appears to be started, but I am not able to access the Admin console.
It is showing the hawtio screen instead.
I'd appreciate any direction.
Try creating a fabric first, by using the fabric:create command.
Fuse can be started in two ways: one is using the ./fuse script from the bin dir; the other is starting it in the background with ./start and connecting with ./client.
./client connects to the running Fuse Karaf console.
[kkakarla@kkakarla bin]$ ./start
[kkakarla@kkakarla bin]$ ./status
Running ...
[kkakarla@kkakarla bin]$ ./client
Logging in as admin
Open a browser to http://localhost:8181 to access the management console
Create a new Fabric via 'fabric:create'
or join an existing Fabric via 'fabric:join [someUrls]'
Hit '<ctrl-d>' or 'osgi:shutdown' to shutdown JBoss Fuse.
JBossFuse:admin@root>
Please make sure you uncommented the last line of the etc/users.properties file before starting the Fuse server. Fuse provides access to the management console using hawtio. Since you are starting Fuse with ./start, you may not be able to create a fabric directly until you connect to the client; in that case you can connect to the Fuse server with ssh admin@localhost -p 8101 and then run fabric:create. Otherwise you can ./stop the server, start it with ./fuse or ./karaf, and then run fabric commands directly. If the issue is still not resolved, we have to check the logs.
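For example, with the server already started via ./start (admin/admin is an assumption based on the default entry you uncomment in etc/users.properties; adjust to your own credentials):
ssh admin@localhost -p 8101
JBossFuse:admin@root> fabric:create --wait-for-provisioning
Once provisioning finishes, http://localhost:8181 should show the Fuse management console rather than the bare hawtio welcome screen.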

How to start up a Kubernetes cluster using Rocket?

I'm using a Chromebook Pixel 2, and it's easier to get Rocket working than Docker. I recently installed Rocket 1.1 into /usr/local/bin, and have a clone of the Kubernetes GitHub repo.
When I try to use ./hack/local-up-cluster.sh to start a cluster, it eventually fails with this message:
Failed to successfully run 'docker ps', please verify that docker is installed and $DOCKER_HOST is set correctly.
According to the docs, k8s supports Rocket. Can someone please guide me on how to start a local cluster without a working Docker installation?
Thanks in advance.
You need to set three environment variables before running ./hack/local-up-cluster.sh:
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
This is described in the docs for getting started with a local rkt cluster.
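Putting it together, a sketch of the full invocation; the two paths are assumptions about where rkt and its stage1 image live on your machine:
export CONTAINER_RUNTIME=rkt
export RKT_PATH=/usr/local/bin/rkt
export RKT_STAGE1_IMAGE=/usr/local/bin/stage1-coreos.aci
./hack/local-up-cluster.sh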
Try running export CONTAINER_RUNTIME="rkt" and then re-running the script.