Using the Storm UI we can find the supervisors for a particular active topology, but how do we find the same information using only the command line?
I'm not sure if the CLI has a command that lets you get the supervisors, but if not you can try the Storm UI REST API http://storm.apache.org/releases/1.2.0/STORM-UI-REST-API.html (use curl or a similar tool).
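For example, a minimal sketch with curl, assuming the UI runs on localhost:8080 (adjust host/port to your setup):

# List all supervisors known to the cluster (id, host, slots, uptime, ...)
curl http://localhost:8080/api/v1/supervisor/summary

# List active topologies to get the topology id, then fetch that topology's details
curl http://localhost:8080/api/v1/topology/summary
curl http://localhost:8080/api/v1/topology/<TOPOLOGY_ID>

Which fields of the topology response map workers to supervisors varies by Storm version, so cross-check against the REST API doc linked above for your release.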
I am trying to implement a REST client for Flink to send jobs via Flink's RESTful services. I also want to integrate Flink and Kubernetes natively. I have decided to use "Application Mode" as the deployment mode, following the Flink documentation.
I have already implemented a job, packaged it as a jar, and tested it on standalone Flink. My aim is to move to Kubernetes and deploy my application in Application Mode via Flink's REST API.
I have already investigated the samples in the Flink documentation on Native Kubernetes, but I cannot find an example of running them via the REST services (especially how to set --target kubernetes-application/kubernetes-session or other parameters).
In addition to the samples, I checked out the Flink sources from GitHub and tried to find a sample implementation or get some clues.
I think the ones below are related to my case:
org.apache.flink.client.program.rest.RestClusterClient
org.apache.flink.kubernetes.KubernetesClusterDescriptorTest.testDeployApplicationCluster
But they are all too complicated for me to understand the points below.
For Application Mode, is there any need to initialize a container that serves the Flink REST services before submitting the job? If so, is it the JobManager?
For Application Mode, how can I set the same command-line parameters via the REST services?
For Session Mode, in the command-line samples, kubernetes-session.sh is executed before job submission to initialize a JobManager container. How should I do this step via a REST client?
For Session Mode, how can I set the same command-line parameters via the REST services? The command-line samples pass the job .jar as a parameter; should I upload the jar before submitting the job?
Could you please give me some clues or a sample so I can continue my implementation?
Best regards,
Burcu
I suspect that if you study the implementation of the Apache Flink Kubernetes Operator you'll find some clues.
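For the session-mode half of the question, here is a minimal sketch of submitting a job purely over REST once a session cluster is already running (e.g. started with kubernetes-session.sh) and its JobManager REST port (8081 by default) is reachable; the host, jar path, jar id, and entry class below are placeholders:

# Upload the job jar to the session cluster's JobManager
curl -X POST -H "Expect:" -F "jarfile=@/path/to/my-job.jar" http://<jobmanager-host>:8081/jars/upload

# Run the uploaded jar; use the jar id returned by the upload call
curl -X POST "http://<jobmanager-host>:8081/jars/<jar-id>/run?entry-class=com.example.MyJob&parallelism=2"

Note that --target is a client-side option of the flink/kubernetes-session scripts, not something the REST API accepts. For Application Mode, as far as I understand, there is no REST endpoint to call before the job exists: the JobManager is created together with the application (which is why studying the operator helps), and its REST API only becomes available afterwards.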
I see many docs and posts about how to send logs to Stackdriver, but almost no information about how to do the opposite: send logs from Stackdriver to Kafka.
In my case, our Ops team wants to collect the logs from our web servers using Google's Stackdriver agents and push them to Stackdriver. However, for my stream-processing needs I want to get the logs into Kafka to use its unparalleled ability to retain and reprocess data by any number of consumers, something that I cannot do with Pub/Sub.
So, what are the options for doing this? I only saw a couple of possible avenues - neither sounds too good:
Based on this post (https://powerspace.tech/how-to-stream-data-from-google-pubsub-to-kafka-with-kafka-connect-dbef1c340a76): push data into Pub/Sub first, and then read from it using either the Kafka connector or my own Kafka consumer. I hate the thought of adding yet another hop (serialize/deserialize/ack/etc.) between the source of the data and Kafka.
I noticed a brief mention, in passing, of adding a plugin to Google's version of Fluentd (which is what the Stackdriver log-collection agent is based on) here: https://powerspace.tech/how-to-stream-data-from-google-pubsub-to-kafka-with-kafka-connect-dbef1c340a76. There aren't many details, so it's hard to tell how involved this approach is.
Any other options?
Thank you!
Enter the Kafka console and add some elements there. Once you have added the elements in the Kafka console, you need to check whether they are reflected in Cloud Shell. For this, run the command gcloud pubsub subscriptions pull from-kafka --auto-ack --limit=10. It will take some time to sync with the Kafka console, so you will get results only after running the command a couple of times.
You run the commands in Cloud Shell and see the output in the Kafka VM's SSH session.
[screenshot: output in the Kafka VM SSH session]
Now verify the exact opposite procedure, where you run the command in the Kafka VM and see the output in Cloud Shell. It will take some time for the output to be reflected, and you may have to run gcloud pubsub subscriptions pull from-kafka --auto-ack --limit=10 a couple of times to see it. Your output will look like this:
[screenshot: messages pulled from the from-kafka subscription in Cloud Shell]
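To make the "add certain elements" step concrete, here is a rough sketch of the Kafka side; the topic name to-pubsub, the Kafka bin path, and the broker address are placeholder assumptions about the setup, while the from-kafka subscription comes from the commands above:

# On the Kafka VM: produce a test message into the topic the Pub/Sub connector reads from
echo "hello from kafka" | bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic to-pubsub
# (use --broker-list instead of --bootstrap-server on older Kafka versions)

# In Cloud Shell: pull the messages the connector forwarded to Pub/Sub
gcloud pubsub subscriptions pull from-kafka --auto-ack --limit=10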
The Kafka plugin is deprecated. For more information, refer to https://cloud.google.com/stackdriver/docs/deprecations
Note: This functionality is only available for agents running on Linux. It is not available on Windows.
Kafka is monitored via JMX. The Monitoring agent supports Kafka version 0.8.2 and higher.
On your VM instance, download kafka-082.conf from the GitHub configuration repository and place it in the directory /etc/stackdriver/collectd.d/:
(cd /etc/stackdriver/collectd.d/ && sudo curl -O https://raw.githubusercontent.com/Stackdriver/stackdriver-agent-service-configs/master/etc/collectd.d/kafka-082.conf)
The downloaded plugin configuration file assumes that your Kafka server is configured to accept JMX connections on port 9999. If you have configured Kafka with a different JMX port, as root, edit the file and follow the instructions to change the JMX port settings.
After adding the configuration file, restart the Monitoring agent by running the following command:
sudo service stackdriver-agent restart
What is monitored:
https://cloud.google.com/monitoring/api/metrics_agent#agent-kafka
I want to run a job (or any other method) on a set of nodes in Rundeck to test the connection to each node and then tag them as node_name_failed or node_name_succeed. Is it possible to do this, with or without plugins, so that it can be achieved in one click?
Currently, I'm able to do this by externally parsing the node's execution status and modifying the resource model, but that requires navigating away from the UI.
In Rundeck Enterprise you can use the health check feature and later dispatch your jobs with a filter based on node status. A good way to do this in Community Edition is to call the jobs via the API or RD CLI, wrapped in a bash script that checks the node status first.
Here you have an example of calling a job using the API, and here one using RD CLI.
EDIT: You can also build your own health check system based on this; take a look at this and this.
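As a rough illustration of the wrapper-script idea, here is a sketch that checks reachability before dispatching a job to a node via the API; the token, job UUID, API version, and node name are placeholders, so verify the run endpoint and parameters against the examples linked above:

#!/bin/bash
RUNDECK_URL="https://rundeck.example.com"
API_TOKEN="XXXXXXXX"                                  # placeholder auth token
JOB_ID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"         # placeholder job UUID
NODE="node01"

# Simple reachability check before dispatching the job to this node
if ping -c 1 -W 2 "$NODE" > /dev/null 2>&1; then
  echo "$NODE reachable, running job..."
  curl -s -X POST \
       -H "X-Rundeck-Auth-Token: $API_TOKEN" \
       -H "Content-Type: application/json" \
       -d "{\"filter\": \"name: $NODE\"}" \
       "$RUNDECK_URL/api/41/job/$JOB_ID/executions"
else
  echo "$NODE unreachable, skipping"
fi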
I am trying to submit a Spark job via Livy using its REST API. But if I run the same script multiple times, it starts multiple instances of the job with different job IDs. I am looking for a way to kill the Spark/YARN job running with the same name before starting a new one. The Livy documentation (https://github.com/cloudera/livy#batch) says to delete the batch job, but Livy sessions don't return the application name; only the application ID is returned.
Is there another way to do this ?
For Livy version 0.7.0, the following works.
Where the Session ID you want to stop is 1:
Python:
import requests

# DELETE the session to stop it (and kill the Spark application behind it)
headers = {'Content-Type': 'application/json'}
session_url = 'http://your-livy-server-ip:8998/sessions/1'
requests.delete(session_url, headers=headers)
Shell:
curl -X DELETE http://your-livy-server-ip:8998/sessions/1
See https://livy.incubator.apache.org/docs/latest/rest-api.html
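Since the question is about jobs submitted as batches, the same pattern should apply to the /batches endpoint; the batch id below is a placeholder, and listing the batches first lets you map each batch id to its YARN application id:

# List batch sessions (each entry includes its Livy id and, once started, the YARN appId)
curl http://your-livy-server-ip:8998/batches

# Kill batch 1 together with the Spark application behind it
curl -X DELETE http://your-livy-server-ip:8998/batches/1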
You can use the LivyClient API to submit Spark jobs to the Livy server.
The LivyClient API has a stop method which can be used to kill the job:
livyClient.stop(true);
Sessions that were active when the Livy server was stopped may need to be killed manually. Use the tools from your cluster manager to achieve that (for example, the yarn command line tool).
Run the following command to find the application IDs of the interactive jobs started through Livy.
yarn application -list
Run the following command to kill those jobs.
yarn application -kill <Application ID>
Refer: https://learn.microsoft.com/en-us/azure/hdinsight/hdinsight-apache-spark-known-issues#livy-leaks-interactive-session
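To address the original requirement of killing the job with the same name before starting a new one, here is a small sketch that looks the application up by name; the application name is a placeholder, and it assumes the application id is the first column of the yarn application -list output:

APP_NAME="my-spark-job"   # placeholder: the --name / spark.app.name the job is submitted with

# Find running applications whose name matches, extract their ids, and kill them
yarn application -list -appStates RUNNING 2>/dev/null \
  | grep "$APP_NAME" \
  | awk '{print $1}' \
  | xargs -r -n1 yarn application -kill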
I am working with Apache Kafka and using a distributed worker. I can start my worker as shown below:
# Command to start the distributed worker
bin/connect-distributed.sh config/connect-distributed.properties
This is from the official documentation. After this we can create connectors and tasks, and it works fine.
But when I change my connector or task logic, I have to add the new jar to Kafka's classpath and then restart the worker.
I'm not sure what the right way is; I think the worker has to be stopped and started again.
But I don't know how to stop the worker correctly.
Of course, I can find my process with ps aux | grep worker, kill it, and then kill the REST server, which I would also have to find via ps. But that seems like a strange approach. Killing two processes isn't a good idea, but I can't find any information on how to do it another way.
If you know the right way, please help me :)
Thanks for your time.
Killing two processes isn't a good idea
ConnectDistributed is only one process. There is no separate REST server to stop.
And yes, pausing the connector (a PUT to /connectors/:connector/pause on the Connect REST API) followed by a kill <pid> is the correct way to stop it.
If installed with a recent version of Confluent Platform, you can stop/start using systemctl.
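Putting that together, a minimal sketch assuming the worker's REST API is on the default port 8083 and the connector is named my-connector (both placeholders):

# Pause the connector so its tasks stop processing
curl -X PUT http://localhost:8083/connectors/my-connector/pause

# The worker is a single JVM (org.apache.kafka.connect.cli.ConnectDistributed);
# send it SIGTERM for a clean shutdown
kill $(pgrep -f ConnectDistributed)

# Or, on a recent Confluent Platform install, via systemd (the unit name may vary by version)
sudo systemctl stop confluent-kafka-connect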