Logstash Plugin not getting installed

I am trying to install the Kafka output plugin for Logstash 1.5.0.beta1.
I had previously done it using the command
$LS_HOME/bin/plugin install logstash-output-kafka
as given on the Logstash website.
But now the installation gives me the following error:
Clamp::UsageError: No such sub-command 'logstash-output-kafka'
signal_usage_error at /home/madhura/Softwares/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/clamp-0.6.3/lib/clamp/command.rb:103
find_subcommand_class at /home/madhura/Softwares/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/clamp-0.6.3/lib/clamp/subcommand/execution.rb:28
instatiate_subcommand at /home/madhura/Softwares/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/clamp-0.6.3/lib/clamp/subcommand/execution.rb:17
execute at /home/madhura/Softwares/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/clamp-0.6.3/lib/clamp/subcommand/execution.rb:10
run at /home/madhura/Softwares/logstash-1.5.0.beta1/lib/logstash/runner.rb:144
call at org/jruby/RubyProc.java:271
run at /home/madhura/Softwares/logstash-1.5.0.beta1/lib/logstash/runner.rb:171
call at org/jruby/RubyProc.java:271
initialize at /home/madhura/Softwares/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/stud-0.0.18/lib/stud/task.rb:12
Kindly help me find the reason and install the plugin.

Edit: as of Logstash 1.5.0, Kafka input/output is natively supported; see:
https://www.elastic.co/blog/logstash-kafka-intro
In logstash-1.5.0.beta1 (through 1.5.0.rc2 at least), the Kafka input/output plugins ship with the base logstash install:
.../logstash-1.5.0.beta1$ ./bin/plugin list | grep kafka
logstash-input-kafka (0.1.5)
logstash-output-kafka (0.1.3)
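Since the plugin already ships with the install, you can use it directly in a pipeline config. A minimal sketch of an output block, assuming the 0.1.x option names (broker_list, topic_id) and a Kafka broker on localhost:9092:
output {
  kafka {
    broker_list => "localhost:9092"   # comma-separated list of broker host:port pairs
    topic_id => "logstash"            # Kafka topic to publish events to
  }
}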

Related

How to get performance of a MongoDB cluster from logs using MongoDB Keyhole?

I have installed MongoDB Keyhole on my Ubuntu server. I am trying to analyze the performance of a MongoDB cluster from the log file using the below command.
keyhole --loginfo log_file[.gz] [--collscan] [-v]
But the problem is I am getting the below error, even though the log file is in the same directory where I am running the command. Anyone, please help me with this.
2022/10/12 11:20:45 open logfilename_mongodb.log.gz.[gz]: no such file or directory
I have fixed the issue with the below command format.
./keyhole -loginfo -v ~/Downloads/logfilepath.log
Glancing at the Logs Analytics readme for the project, it looks like you've got a simple syntax issue here. The [] characters are intended to indicate optional arguments/settings to use when running keyhole.
Have you tried a syntax similar to this?
keyhole --loginfo log_file --collscan -v
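In other words, the brackets in the usage string mark optional pieces and are not typed literally. A couple of illustrative invocations (the file names are placeholders):
./keyhole --loginfo mongodb.log.gz              # compressed log, default analysis
./keyhole --loginfo mongodb.log --collscan -v   # also report collection scans, verbose output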

Bootstrap failed: 5: Input/output error while running any service on macOS Big Sur version 11.5.2

I am trying to run the mongodb-community@4.2 service using brew services start mongodb-community@4.2 (I face a similar error while running the httpd service or any other service).
Following is the error:
Error: Failure while executing; /bin/launchctl bootstrap gui/502 /Users/chiragsingla/Library/LaunchAgents/homebrew.mxcl.mongodb-community@4.2.plist exited with 5.
There can be multiple reasons behind this error message, so the first thing to do is find where your mongo-related logs are stored. To do that, run the following command -
sudo find / -name mongod.conf
This will locate the MongoDB config file. On running this command, I got /usr/local/etc/mongod.conf; you may find it directly under /etc.
On opening mongod.conf, you will find the log path mentioned there. You can open the log file itself, or instead get its last 15-20 lines via the tail command -
tail -n 15 <<your mongo db log path>>
Now, you will need to debug the issue mentioned in the logs. Generally, I have seen these three sets of issues -
Permission issue with /tmp/mongodb-27017.sock - While some SO answers asked to change the permissions for this file as a solution, my issue with this only went away after I removed this file.
Compatibility issue - If you see a message like Version incompatibility detected, it means that the mongodb version you have currently installed is different from the version whose data is present on your system. Uninstall the current mongodb version and then install the correct older version (if you don't want to lose the data).
Once you have done that and your mongo is up and running, if you want to upgrade the MongoDB version, follow this SO answer.
Permission issues with WiredTiger - Using chmod to change file permissions resolved these.
In case you have any issue other than these three, you will still need to search more on SO and figure it out. Hope this was of some help! :)
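Putting those steps together, the debugging flow looks roughly like this (the paths are examples from my machine; substitute whatever find and mongod.conf report on yours):
sudo find / -name mongod.conf 2>/dev/null       # locate the config file
grep -A2 systemLog /usr/local/etc/mongod.conf   # read the log path out of it
tail -n 20 /usr/local/var/log/mongodb/mongo.log # inspect the most recent errors
sudo rm /tmp/mongodb-27017.sock                 # the socket-permission case
brew services restart mongodb-community@4.2     # try the service again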

GraphQL error when launching KeystoneJS application

I just created a KeystoneJS app using yarn create keystone-app my-app.
When I try to run it using yarn dev and browse to it I get the following error:
Error: Cannot use GraphQLSchema "[object GraphQLSchema]" from another module or realm.
Ensure that there is only one instance of "graphql" in the node_modules
directory. If different versions of "graphql" are the dependencies of other
relied on modules, use "resolutions" to ensure only one version is installed.
https://yarnpkg.com/en/docs/selective-version-resolutions
Duplicate "graphql" modules cannot be used at the same time since different
versions may have different capabilities and behavior. The data from one
version used in the function from another could produce confusing and
spurious results.
at instanceOf (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/jsutils/instanceOf.js:28:13)
at isSchema (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/type/schema.js:36:34)
at assertSchema (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/type/schema.js:40:8)
at validateSchema (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/type/validate.js:44:28)
at graphqlImpl (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/graphql.js:79:62)
at /my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/graphql.js:28:59
at new Promise (<anonymous>)
at graphql (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/graphql.js:26:10)
at _graphQLQuery.<computed> (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/lib/Keystone/index.js:477:7)
at Keystone.executeQuery (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/lib/Keystone/index.js:252:14)
at Object.module.exports [as onConnect] (/my/home/path/my-first-ks-app/initial-data.js:10:22)
at /my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/lib/Keystone/index.js:323:35
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at async executeDefaultServer (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/bin/utils.js:114:3)
error Command failed with exit code 1.
I am on Windows 10 / WSL (v1) with Ubuntu. KeystoneJS is running from Linux, and the MongoDB server is installed and running on Windows. This is because when I had it running in Linux, mongod showed as running and listening, but I was not able to connect to it (via KeystoneJS or via the shell using the mongo command).
How do I fix this issue?
I was using graphql@15.0.0 when I got this error.
I fixed it by downgrading to graphql@14.6.0.
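If you are on Yarn, you can confirm the duplicate and pin a single version via the "resolutions" mechanism the error message itself suggests; a sketch (14.6.0 is just the version that worked above):
yarn list --pattern graphql   # more than one graphql entry confirms the duplicate
# then add to package.json:   "resolutions": { "graphql": "14.6.0" }
yarn install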
I got this issue on a Keystone project with Apollo.
Run this line:
rm -rf node_modules/@keystonejs/keystone/node_modules/graphql
or add it to the Dockerfile when building the image for production.

What causes error "Connection test failed: spawn npm; ENOENT" when creating new Strapi project with MongoDB?

I am trying to create a new Strapi app on Ubuntu 16.04 using MongoDB. After stepping through the tutorial here: https://strapi.io/documentation/3.0.0-beta.x/guides/databases.html#mongodb-installation, I get the following error: Connection test failed: spawn npm; ENOENT
The error seems obvious, but I'm having trouble getting to the cause of it. I've installed the latest version of MongoDB and have ensured it is running using service mongod status. I can also connect directly using nc, as below.
$ nc -zvv localhost 27017
Connection to localhost 27017 port [tcp/*] succeeded!
Any help troubleshooting this would be appreciated! Does Strapi perhaps log setup errors somewhere, or is there a way to get verbose logging? Is it possible the connection error would be logged by MongoDB somewhere?
I was able to find the answer. The problem was with using npx instead of Yarn. The Strapi documentation states that either should work; however, it is clear from my experience that there is a bug when using npx.
I switched to Yarn and the process proceeded as expected without error. Steps were otherwise exactly the same.
Update: There is also a typo in the Strapi documentation for Yarn. It includes the word "new" before the project name, which will create a project called new and ignore the project name.
Strapi docs (incorrect):
yarn create strapi-app new my-project
Correct usage, based on my experience:
yarn create strapi-app my-project
The ENOENT error is "an abbreviation of Error NO ENTry (or Error NO ENTity), and can actually be used for more than files/directories."
Why does ENOENT mean "No such file or directory"?
Everything I've read on this points toward issues with environment variables and the process.env.PATH.
"NOTE: This error is almost always caused because the command does not exist, because the working directory does not exist, or from a windows-only bug."
How do I debug "Error: spawn ENOENT" on node.js?
If you take the function that Jiaji Zhou provides in the link above and paste it into the top of your config/functions/bootstrap.js file (above module.exports), it might give you a better idea of where the error is occurring, specifically it should tell you the command it ran. Then run the command > which nameOfCommand to see what file path it returns.
"miss-installed programs are the most common cause for a not found command. Refer to each command documentation if needed and install it." - laconbass (from the same link, below Jiaji Zhou's answer)
This is how I interpret all of the above and form a solution. Put that function in bootstrap.js, then take the command returned from the function and run > which nameOfCommand. Then in bootstrap.js (you can comment out the function), put console.log(process.env.PATH), which will return a string of all the directories your current environment checks for executables. If the path returned from your which command isn't in your process.env.PATH, you can move the command into a path that is, or try re-installing.
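For the specific failure here (Strapi spawning npm), the check boils down to two shell commands; a minimal sketch:
which npm     # should print a path, e.g. /usr/bin/npm; no output means npm isn't found
echo $PATH    # the directory printed by `which npm` must appear in this colon-separated list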

What is a Spark kernel for Apache Toree?

I have a Spark cluster whose master is on 192.168.0.60:7077.
I used to use Jupyter Notebook to write some PySpark scripts.
I now want to move on to Scala.
I don't know the Scala world.
I am trying to use Apache Toree.
I installed it, downloaded the Scala kernels, and ran it to the point of opening a Scala notebook. Up to there everything seems OK :-/
But I can't find the Spark context, and there are errors in the Jupyter server logs:
[I 16:20:35.953 NotebookApp] Kernel started: afb8cb27-c0a2-425c-b8b1-3874329eb6a6
Starting Spark Kernel with SPARK_HOME=/Users/romain/spark
Error: Master must start with yarn, spark, mesos, or local
Run with --help for usage help or --verbose for debug output
[I 16:20:38.956 NotebookApp] KernelRestarter: restarting kernel (1/5)
As I don't know Scala, I am not sure what the issue is here.
It could be that:
I need a Spark kernel (according to https://github.com/ibm-et/spark-kernel/wiki/Getting-Started-with-the-Spark-Kernel)
I need to add an option on the server (the error message says 'Master must start with yarn, spark, mesos, or local')
or something else :-/
I just wanted to migrate from Python to Scala, and I have spent a few hours lost just on starting up the Jupyter IDE :-/
It looks like you are using Spark in standalone deploy mode. As Tzach suggested in his comment, the following should work:
SPARK_OPTS='--master=spark://192.168.0.60:7077' jupyter notebook
SPARK_OPTS expects the usual spark-submit parameter list.
If that does not help, you would need to check the SPARK_MASTER_PORT value in conf/spark-env.sh (7077 is the default).
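For completeness, you can confirm the configured port from the shell (assuming SPARK_HOME points at your Spark install):
grep SPARK_MASTER_PORT "$SPARK_HOME/conf/spark-env.sh"   # empty output means the default 7077 is in effect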