Snowflake SnowAlert - running selective alerts

Is there a way to run only specific snowalert alerts
I am setting up SnowAlert and, as part of that, I am running the alerts with the following command:
docker run --env-file file.envs snowsec/snowalert ./run all
or
docker run --env-file file.envs snowsec/snowalert ./run alerts
Is there a way to run only specific alerts?
Currently, with either command, all alerts get run.
I would like to run specific alerts so that I can schedule them separately.
Looking forward to any advice on that.
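If the runner only supports the documented all and alerts targets, one stopgap is simply to schedule those documented commands on different cadences, e.g. with cron. This is only a sketch (the env-file path and the schedules are assumptions) and it still runs every alert rather than specific ones:

# Hypothetical crontab entries
# run the alert pass every hour
0 * * * * docker run --env-file /opt/snowalert/file.envs snowsec/snowalert ./run alerts
# run everything once a day at 02:00
0 2 * * * docker run --env-file /opt/snowalert/file.envs snowsec/snowalert ./run all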

How to wait for full cloud-initialization before VM is marked as running

I am currently configuring a virtual machine to work as an agent within Azure (with Ubuntu as the image), where the additional configuration runs through a cloud-init file.
Among other things, it contains the 'fix' below within bootcmd and multiple steps within runcmd.
However, the machine already shows the state "running" within the Azure portal while it is still in the cloud configuration phase (cloud_config_modules). As a result, pipelines see the machine as ready for use while not everything is installed/configured yet, and they break.
I tried a couple of things that did not have the desired effect, after which I stumbled on the following article/bug:
The proposed solution worked; however, I switched to a RHEL image and it stopped working.
I noticed this image is not using walinuxagent, as the solution states, but waagent, so I tried replacing that as in the example below, without any success.
bootcmd:
  - mkdir -p /etc/systemd/system/waagent.service.d
  - echo "[Unit]\nAfter=cloud-final.service" > /etc/systemd/system/waagent.service.d/override.conf
  - sed "s/After=multi-user.target//g" /lib/systemd/system/cloud-final.service > /etc/systemd/system/cloud-final.service
  - systemctl daemon-reload
After this, I also tried moving the runcmd steps into bootcmd. This resulted in a boot that took ages and eventually froze.
Since I am not that familiar with RHEL and Linux overall, I wanted to ask whether anyone has suggestions I can additionally try.
(Perhaps some other configuration that makes waagent wait on cloud-final.service?)
However the machine already had the state running, while still running the cloud configuration phase (cloud_config_modules).
Could you please be more specific? Where did you read the machine state?
The reason I ask is that cloud-init status will report status: running until cloud-init is done running, at which point it will report status: done
What is the purpose of waiting until cloud-init is done? I'm not sure exactly what you are expecting to happen, but here are a couple of things that might help.
If you want to execute a script "at the end" of cloud-init initialization, you could put the script directly in runcmd. If you want to wait for cloud-init in an external script, you could run cloud-init status --wait, which prints a visual indicator and eventually returns once cloud-init is complete.
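For example, here is a minimal sketch of an external script that blocks until cloud-init has finished before starting anything that needs the fully configured machine (the follow-up script path is only a placeholder):

#!/bin/sh
# Block until cloud-init reports it is done with all stages; --wait prints a
# spinner and returns once processing has completed.
cloud-init status --wait
# Only now start whatever depends on the fully configured VM
# (placeholder for your own follow-up step).
/usr/local/bin/start-pipeline-agent.sh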
On reasonably recent Azure Linux VM images, cloud-init rather than WALinuxAgent acts as the VM provisioner. The VM is marked provisioned by the Azure cloud-init datasource module very early during cloud-init processing (source), before any of the cloud-init modules that are configurable via user data run. WALinuxAgent is only responsible for provisioning Azure VM extensions. It does not appear to be possible to delay sending the 'VM ready' signal to Azure without modifying the VM image and patching the source code of the cloud-init Azure datasource.

Run PowerShell on Windows Container start and keep it running

I've been experimenting and searching for a long time without finding an answer that works.
I have a Windows Container and I need to embed a startup script for each time a new container is created.
All the answers I found suggest one of the following:
Add the command to the Dockerfile - this is not good because it will only run when the image is built. I need it to run every single time a new container is created from the image.
Use docker exec after starting a container - this is also not what I need. These images are intended to be "shippable". I need the script to run without any special action apart from creating a new container.
Use ENTRYPOINT - I had two cases here: it either fails and immediately exits, or it succeeds but the container stops. I need it to keep running.
Basically, the goal of this is to do some initial configuration on the container when it starts and then keep it running.
The actions are around generating a GUID and registering the hostname. These have to be unique, which is why I need to run them immediately when the container starts.
Looks like CMD in the Dockerfile is all I needed. I used:
CMD powershell -file
In the script I simply check whether it is the first time it is running.

Airflow: what do `airflow webserver`, `airflow scheduler` and `airflow worker` exactly do?

I've been working with Airflow for a while now; it was set up by a colleague. Lately I have run into several errors, which require me to understand in more depth how to fix certain things within Airflow.
I do understand what the three processes are; I just don't understand what happens underneath when I run them. What exactly happens when I run one of these commands? Can I see somewhere afterwards that they are running? And if I run one of these commands, does this overwrite older webservers/schedulers/workers or add a new one?
Moreover, if I for example run airflow webserver, the screen shows some of the things that are happening. Can I simply get out of this by pressing CTRL + C? Because when I do this, it says things like Worker exiting and Shutting down: Master. Does this mean I'm shutting everything down? How else should I get out of the webserver screen then?
Each process does what it is built to do while it is running (the webserver provides a UI, the scheduler determines when things need to be run, and the workers actually run the tasks).
I think your confusion is that you may be seeing them as commands that tell some sort of "Airflow service" to do something, but they are each standalone commands that start the processes to do stuff. i.e. starting from nothing, you run airflow scheduler: now you have a scheduler running. Run airflow webserver: now you have a webserver running. When you run airflow webserver, it is starting a Python Flask app. While that process is running, the webserver is running; if you kill the command, it goes down.
All three have to be running for Airflow as a whole to work (assuming you are using an executor that needs workers). You should only ever have one scheduler running, but if you were to run two processes of airflow webserver (ignoring port conflicts), you would then have two separate HTTP servers running against the same metadata database. Workers are a little different in that you may want multiple worker processes running so you can execute more tasks concurrently. So if you create multiple airflow worker processes, you'll end up with multiple processes taking jobs from the queue, executing them, and updating the task instance with the status of the task.
When you run any of these commands you'll see the stdout and stderr output in console. If you are running them as a daemon or background process, you can check what processes are running on the server.
If you ctrl+c you are sending a signal to kill the process. Ideally for a production Airflow cluster, you should have some supervisor monitoring the processes and ensuring that they are always running. Locally, you can either run the commands in the foreground of separate shells, minimize them, and just keep them running when you need them, or run them as background daemons with the -D argument, e.g. airflow webserver -D.
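For example, a local setup might start all three as daemons and then check that they are up (the grep pattern is just one way of checking):

# Start each component as a background daemon
airflow webserver -D
airflow scheduler -D
airflow worker -D    # only needed for executors that use workers; newer Airflow versions use "airflow celery worker"

# Verify that the processes are running
ps -ef | grep -E 'airflow (webserver|scheduler|worker)' | grep -v grep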

Ansible custom tool, retry and redeploy

I am trying to use Ansible as a deployment tool for a set of hosts and I'm not able to find the right way of doing it.
I want to run a custom tool that installs an rpm on a host.
Now I can do
ansible dev -i hosts.txt -m shell -a "rpmdeployer --install package_v2.rpm"
But this doesn't give a retry file (failed hosts).
To get the retry file, I tried a simple playbook:
---
- hosts: dev
  tasks:
    - name: deployer
      command: rpmdeployer --install package_v2.rpm
I know it's not in the spirit of Ansible to execute custom commands and scripts. Is there a better way of doing this? Also, is there a way to keep trying until all hosts succeed?
Is there a better way of doing this?
You can write a custom module. The custom module could even be the tool, so you get rid of installing that dependency. Modules can be written in any language but it is advisable to use Python because:
Python is a requirement of Ansible anyway
When using Python you can use the API provided by Ansible
If you had a custom module for your tool, your task could look like this:
- name: deployer
  deployer: package_v2.rpm
Also is there a way to keep trying till all hosts succeeds?
Ansible can automatically retry tasks.
- name: deployer
  command: rpmdeployer --install package_v2.rpm
  register: result
  until: result | success
  retries: 42
  delay: 1
This works, given your tool returns correct exit codes (0 on success, non-zero on failure). If not, you can apply any custom condition, e.g. searching stdout for specific content.
I'm not aware of a tool to automatically retry when the playbook itself has failed, but it shouldn't be too hard to create a small wrapper script that checks for the retry file and re-runs the playbook with --limit @foo.retry until it is no longer re-created.
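As a rough sketch, assuming the playbook is called deploy.yml and the retry file is written next to it (the default in older Ansible versions), such a wrapper could look like this:

#!/bin/bash
# Re-run the playbook against the hosts that failed last time,
# until no new retry file is produced.
ansible-playbook -i hosts.txt deploy.yml
while [ -f deploy.retry ]; do
    mv deploy.retry deploy.retry.last
    ansible-playbook -i hosts.txt deploy.yml --limit @deploy.retry.last
done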
But I'm not sure that makes sense. If installing an rpm with your tool fails, I guess it is guaranteed to fail on any retry as well, unless there are other moving parts in the play, like downloading the rpm in the first place; in that case the download could fail and a retry might succeed.

openshift pod fails and restarts frequently

I am creating an app in Origin 3.1 using my Docker image.
Whenever I create the app, a new pod gets created, but it restarts again and again and finally ends up with the status "CrashLoopBackOff".
I analysed the pod logs, but they show no errors; all log data is as expected for a successfully running app. Hence, I am not able to determine the cause.
I came across the link below today, which says "running an application inside of a container as root still has risks, OpenShift doesn't allow you to do that by default and will instead run as an arbitrary assigned user ID."
What is CrashLoopBackOff status for openshift pods?
My image is using the root user only; what should I do to make this work, given that the logs show no errors but the pod keeps restarting?
Could anyone please help me with this?
You are seeing this because whatever process your image is starting isn't a long-running process, finds no TTY, and the container just exits and gets restarted repeatedly, which is a "crash loop" as far as OpenShift is concerned.
Your Dockerfile mentions:
ENTRYPOINT ["container-entrypoint"]
What is this "container-entrypoint" actually doing? You need to check.
Did you use the -p or --previous flag to oc logs to see if the logs from the previous attempt to start the pod show anything?
Red Hat's recommendation is to make files group-owned by GID 0; the user in the container is always in the root group. You won't be able to chown, but you can selectively expose which files are writable.
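As an illustration of that guideline (typically done as a RUN step when building the image; /opt/app is only a placeholder path):

# Give the root group (GID 0) the same permissions as the owner, so the
# arbitrary UID assigned by OpenShift can still read/write the files.
chgrp -R 0 /opt/app
chmod -R g=u /opt/app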
A second option:
In order to allow images that use either named users or the root (0) user to build in OpenShift, you can add the project’s builder service account (system:serviceaccount::builder) to the privileged security context constraint (SCC). Alternatively, you can allow all images to run as any user.
Can you see the logs using:
kubectl logs <podname> -p
This should give you the errors explaining why the pod failed.
I was able to resolve this by creating a script "run.sh" with the content below:
while :; do
  sleep 300
done
and in Dockerfile:
ADD run.sh /run.sh
RUN chmod +x /*.sh
CMD ["/run.sh"]
This way it works. Thanks everybody for pointing out the reason, which helped me find the resolution. But one doubt I still have: why does the process get exited in OpenShift in this case only? I have tried running a Tomcat server in the same way and it works fine without the sleep in the script.