Run logstash forwarder and web as a daemon on Ubuntu

I am currently using logstash-1.4.2, which no longer ships the standard monolithic (flat) jar that earlier releases did. Now I want to start the logstash forwarder as a service.
bin/logstash -f logforwarder.conf
The above command runs it in the foreground, so it gets killed every time I close or exit the terminal.
Similarly, how do I achieve the same for the LogStash indexer?
bin/logstash -f indexer.conf web
This command is likewise killed once the terminal is closed.

On Ubuntu, don't forget to have your own configuration file at /etc/logstash-forwarder (without .conf).
{
  "network": {
    "servers": [
      "logstash.server.ip:5000"
    ],
    "ssl ca": "/etc/ssl/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [
        "/var/log/apache2/access_web1.log",
        "/var/log/apache2/access_web2.log"
      ],
      "fields": {
        "type": "apache",
        "environment": "production"
      }
    }
  ]
}
You can check the script to see where it looks for the config file:
vi /etc/init.d/logstash-forwarder
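If you just want the forwarder to survive closing the terminal without touching the init script, a minimal sketch is to detach it with nohup (the config file name comes from the question; the /tmp paths are arbitrary examples):

```shell
# Start Logstash detached from the terminal so it keeps running after logout
nohup bin/logstash -f logforwarder.conf > /tmp/logstash-forwarder.out 2>&1 &
# Remember the PID so the process can be stopped later with `kill`
echo $! > /tmp/logstash-forwarder.pid
```

The packaged init script (or a proper service wrapper) remains the better long-term option; this only covers the immediate "survives the terminal" need.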

Related

Use command inside a VSCode configuration

As per the documentation given here, I wish to add a text prompt box when I start my debug configuration. My launch.json file is as follows -
{
  "version": "2.0.0",
  "configurations": [
    {
      "name": "Docker Attach my container",
      "type": "coreclr",
      "request": "attach",
      "processId": "${command:pickRemoteProcess}",
      "pipeTransport": {
        "pipeProgram": "docker",
        "pipeArgs": [ "exec", "-i", "${input:containerName}" ],
        "debuggerPath": "/vsdbg/vsdbg",
        "pipeCwd": "${workspaceRoot}",
        "quoteArgs": false
      }
    }
  ],
  "inputs": [
    {
      "id": "containerName",
      "type": "promptString",
      "description": "Please enter container name",
      "default": "my-container"
    }
  ]
}
However, with this, VSCode does not show the prompt for me to enter the container name. Any ideas why this would be the case?
A further question: ideally I'd like to execute a shell script that runs docker ps plus some grep to filter out the correct container name automatically. If that could be done and then passed to this configuration as an argument, that would be even better.
For the second part you can use the extension Command Variable, which lets you use the content of a file, or a key-value pair from it, as a variable.
Write a shell script that runs your docker ps and grep and writes the result to a file, and call it from a preLaunchTask.
Then use the command extension.commandvariable.file.content in an ${input:xxxx} variable so the extension reads the content of that file for use in the launch configuration.
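Such a preLaunchTask script can be sketched like this (the image substring my-app and the output file path are assumptions for illustration):

```shell
# pick_container: read "name image" lines, as produced by
# `docker ps --format '{{.Names}} {{.Image}}'`, from stdin and
# print the name of the first container whose image matches $1.
pick_container() {
  grep "$1" | awk 'NR == 1 { print $1 }'
}

# In the real script (requires docker):
#   docker ps --format '{{.Names}} {{.Image}}' | pick_container my-app > .vscode/container-name.txt

# Self-contained demonstration with fake `docker ps` output:
printf 'web1 my-app:latest\ndb postgres:13\n' | pick_container my-app
```

The demonstration prints web1, which is what Command Variable would then read back from the file.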

Telepresence fails, saying my namespace doesn't exist, pointing to problems with my k8s context

I've been working with a bunch of k8s clusters for a while, using kubectl from the command line to examine information. I don't actually call kubectl directly, I wrap it in multiple scripting layers. I also don't use contexts, as it's much easier for me to specify different clusters in a different way. The resulting kubectl command line has explicit --server, --namespace, and --token parameters (and one other flag to disable tls verify).
This all works fine. I have no trouble with this.
However, I'm now trying to use telepresence, which doesn't give me a choice (yet) of not using contexts to configure this. So, I now have to figure out how to use contexts.
I ran the following (approximate) command:
kubectl config set-context mycontext --server=https://host:port --namespace=abc-def-ghi --insecure-skip-tls-verify=true --token=mytoken
And it said: "Context "mycontext " modified."
I then ran "kubectl config view -o json" and got this:
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [],
  "users": [],
  "contexts": [
    {
      "name": "mycontext",
      "context": {
        "cluster": "",
        "user": "",
        "namespace": "abc-def-ghi"
      }
    }
  ],
  "current-context": "mycontext"
}
That doesn't look right to me.
I then ran something like this:
telepresence --verbose --swap-deployment mydeployment --expose 8080 --run java -jar target/my.jar -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n
And it said this:
T: Error: Namespace 'abc-def-ghi' does not exist
Update:
And I can confirm that this isn't a problem with telepresence: if I just run "kubectl get pods", it fails with "The connection to the server localhost:8080 was refused". So it clearly can't connect to the k8s server. The key is my "set-context" command; it's obviously not working, and I don't understand what I'm missing.
You don't have any clusters or credentials defined in your configuration. First, you need to define a cluster:
$ kubectl config set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
Then something like this for the user:
$ kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-seefile
Then you define your context based on your cluster, user and namespace:
$ kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
More information here
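Putting the three steps together for the asker's token-based setup (host, port, and token are the placeholders from the question; the cluster and user names here are made up for illustration):

```shell
# Define the cluster, skipping TLS verification as in the original command
kubectl config set-cluster mycluster --server=https://host:port --insecure-skip-tls-verify=true
# Define a user that authenticates with the bearer token
kubectl config set-credentials myuser --token=mytoken
# Tie cluster, user, and namespace together, then select the context
kubectl config set-context mycontext --cluster=mycluster --user=myuser --namespace=abc-def-ghi
kubectl config use-context mycontext
```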
Your config should look something like this:
$ kubectl config view -o json
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [
    {
      "name": "development",
      "cluster": {
        "server": "https://1.2.3.4",
        "certificate-authority-data": "DATA+OMITTED"
      }
    }
  ],
  "users": [
    {
      "name": "developer",
      "user": {
        "client-certificate": "fake-cert-file",
        "client-key": "fake-key-seefile"
      }
    }
  ],
  "contexts": [
    {
      "name": "dev-frontend",
      "context": {
        "cluster": "development",
        "user": "developer",
        "namespace": "frontend"
      }
    }
  ],
  "current-context": "dev-frontend"
}

EMR cluster bootstrap + setting environment variables cluster-wise

I am trying to create an EMR cluster (through the command line) and give it some bootstrap actions and configurations file.
The aim is setting some SPARK/Yarn vars, and some other environment variables that should be used cluster-wise (so these env vars should be available on the master AND the slaves).
I am giving it a configurations file that looks like this:
[
  {
    "Classification": "yarn-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "appMasterEnv.SOME_VAR": "123",
          "nodemanager.vmem-check-enabled": "false",
          "executor.memoryOverhead": "5g"
        },
        "Configurations": []
      }
    ]
  },
  {
    "Classification": "spark-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "appMasterEnv.SOME_VAR": "123",
          "PYSPARK_DRIVER_PYTHON": "python36",
          "PYSPARK_PYTHON": "python36",
          "driver.memoryOverhead": "14g",
          "driver.memory": "14g",
          "executor.memory": "14g"
        },
        "Configurations": []
      }
    ]
  }
]
However, when I try to add steps to the cluster, the step fails, claiming it does not know the environment variable SOME_VAR.
Traceback (most recent call last):
File "..", line 9, in <module>.
..
raise EnvironmentError
OSError
(The line number in the traceback is where I try to use the environment variable SOME_VAR.)
Am I doing it the right way both for SOME_VAR and the other Spark/Yarn vars?
Thank you
Remove the appMasterEnv. prefix from appMasterEnv.SOME_VAR, as user lenin suggested.
Use the yarn-env classification to pass environment variables to the worker nodes.
Use the spark-env classification to pass environment variables to the driver when using deploy mode client. When using deploy mode cluster, use yarn-env.
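Applied to the configuration above, the yarn-env block would then look something like this (a sketch; only the variable name changes, the rest of the structure is unchanged):

```json
{
  "Classification": "yarn-env",
  "Properties": {},
  "Configurations": [
    {
      "Classification": "export",
      "Properties": {
        "SOME_VAR": "123"
      }
    }
  ]
}
```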

What are the correct beginsPattern and endsPattern for a background Task in VSCode?

I have a static website (i.e. just html and client side JavaScript) that I serve with python while debugging locally. I have a VSCode task that will start python correctly and am trying to set that task as the preLaunchTask on a Debugger for Chrome launch task. The desired behavior is that whenever I start debugging the serve task below ensures the site is being served.
If I understand background tasks correctly one can set a beginsPattern and endsPattern to signal state changes.
I am expecting that when python echos
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
to stdout that the problemMatcher below would signal to the launch task that it had started. Instead, the launch task waits forever, and doesn't proceed until the task's shell command is terminated.
Can tasks be configured to achieve this sort of behavior?
Launch Configuration
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "chrome",
      "request": "launch",
      "name": "Launch Chrome against localhost",
      "url": "http://localhost:8000",
      "webRoot": "${workspaceFolder}/webroot",
      "preLaunchTask": "serve"
    }
  ]
}
Serve Task
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "serve",
      "type": "shell",
      "command": "python3 -m http.server",
      "windows": {
        "command": "py -m http.server"
      },
      "isBackground": true,
      "options": {
        "cwd": "${workspaceFolder}/webroot"
      },
      "presentation": {
        "echo": true,
        "reveal": "always",
        "focus": false,
        "panel": "dedicated"
      },
      "problemMatcher": {
        "owner": "custom",
        "pattern": [
          {
            "regexp": "^([^\\s].*)$",
            "file": 1,
            "location": 2,
            "message": 3
          }
        ],
        "background": {
          "activeOnStart": true,
          "beginsPattern": "^Serving HTTP (.*)$",
          "endsPattern": "^Keyboard interrupt received, exiting(.*)$"
        }
      }
    }
  ]
}
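A hedged note on why the launch may wait forever with a configuration like this: VSCode only treats a background task as ready once the matcher's endsPattern fires, and python's http.server prints nothing after its single startup line until it shuts down. A common workaround is to point both patterns at the startup line, so the task is reported ready as soon as serving begins:

```json
"background": {
  "activeOnStart": true,
  "beginsPattern": "^Serving HTTP",
  "endsPattern": "^Serving HTTP"
}
```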
So we also had a similar problem: we wanted to set up a debugger on a Django app running inside Docker. On my setup, the debugger launched a preLaunchTask which starts the remote interpreter debugger, among other things (like installing ptvsd).
Original Steps:
preLaunchTask calls a script (./run-debug.sh).
This script calls remote debugger with this command:
docker container exec -it my_app python debug.py runserver --noreload --nothreading 0.0.0.0:8000
On the debug.py file, there's a print statement to know that the debugger started.
That didn't work; apparently VSCode doesn't catch the output of the debugger. Instead, in the run-debug.sh file I added an echo statement, Starting debugger session:, which VSCode caught ^_^. That fixed the issue for me.
tasks.json, relevant problem matcher:
"problemMatcher": {
  "pattern": [
    {
      "regexp": ".",
      "file": 1,
      "location": 2,
      "message": 3
    }
  ],
  "background": {
    "beginsPattern": "^Starting debugger session:",
    "endsPattern": "."
  }
}
run-debug.sh script, relevant part:
# Start remote process
echo 'Starting debugger session:' #VSCode beginsPattern will catch this!
docker container exec -it my_app python debug.py runserver --noreload --nothreading 0.0.0.0:8000

sensu-client check-memory sample not working

I am trying to get sensu working.
The following is the sensu-client.log
ubuntu#ip:~$ sudo tail -f /var/log/sensu/sensu-client.log
{"timestamp":"2016-09-27T16:07:37.628182-0400","level":"info","message":"completing checks in progress","checks_in_progress":[]}
{"timestamp":"2016-09-27T16:07:38.128912-0400","level":"info","message":"closing client tcp and udp sockets"}
{"timestamp":"2016-09-27T16:07:38.129275-0400","level":"warn","message":"stopping reactor"}
{"timestamp":"2016-09-27T16:07:39.224377-0400","level":"warn","message":"loading config file","file":"/etc/sensu/config.json"}
{"timestamp":"2016-09-27T16:07:39.224487-0400","level":"warn","message":"loading config files from directory","directory":"/etc/sensu/conf.d"}
{"timestamp":"2016-09-27T16:07:39.224528-0400","level":"warn","message":"loading config file","file":"/etc/sensu/conf.d/check_mem.json"}
{"timestamp":"2016-09-27T16:07:39.224573-0400","level":"warn","message":"config file applied changes","file":"/etc/sensu/conf.d/check_mem.json","changes":{}}
{"timestamp":"2016-09-27T16:07:39.224618-0400","level":"warn","message":"applied sensu client overrides","client":{"name":"localhost","address":"127.0.0.1","subscriptions":["test","client:localhost"]}}
{"timestamp":"2016-09-27T16:07:39.230963-0400","level":"warn","message":"loading extension files from directory","directory":"/etc/sensu/extensions"}
{"timestamp":"2016-09-27T16:07:39.231048-0400","level":"info","message":"configuring sensu spawn","settings":{"limit":12}}
/etc/sensu/client.json contains
{
  "rabbitmq": {
    "host": "ipaddressofsensuserver",
    "port": 5672,
    "user": "username",
    "password": "password",
    "vhost": "/sensu"
  },
  "api": {
    "host": "localhost",
    "port": 4567
  },
  "checks": {
    "test": {
      "command": "echo -n OK",
      "subscribers": [
        "test"
      ],
      "interval": 60
    },
    "memory-percentage": {
      "command": "check-memory-percent.sh -w 50 -c 70",
      "interval": 10,
      "subscribers": [
        "test"
      ]
    }
  },
  "client": {
    "name": "localhost",
    "address": "127.0.0.1",
    "subscriptions": [
      "test"
    ]
  }
}
I have copied check-memory-percent.sh into the /etc/sensu/conf.d folder.
I was expecting the log file to show check-memory-percent running every 10 seconds. What am I missing here?
The Sensu client cannot operate entirely independently of the server, but it can schedule its own checks and have the results sent to the server through the transport (RabbitMQ in this case). You'll have to add "standalone": true to the check configuration for this to take effect, and then restart the sensu-client service.
So, the file /etc/sensu/conf.d/check_mem.json should look something like:
{
  "checks": {
    "memory-percentage": {
      "command": "/etc/sensu/conf.d/check-memory-percent.sh -w 50 -c 70",
      "interval": 10,
      "standalone": true
    }
  }
}
Remember to remove the block from /etc/sensu/client.json as well, as you may get unexpected results if you have the same check name defined multiple times.
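It can also help to run the check by hand first, to confirm the script is executable and exits with the usual Nagios-style codes Sensu expects (0 OK, 1 warning, 2 critical); the path here is the one from the config above:

```shell
# Make sure the script is executable, then run it once manually
chmod +x /etc/sensu/conf.d/check-memory-percent.sh
/etc/sensu/conf.d/check-memory-percent.sh -w 50 -c 70
echo "exit code: $?"
```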
In client.json, under "client", you need to add the subscriptions, like in the example here. It should match the "subscribers" definition for your check.