Is there a repository for downloading sample component logs of OpenStack?

I am new to OpenStack. Is there a repository where I can download sample logs for each OpenStack component, to be used for analysis? I need logs from each component with a log level of error or critical. I found some logs on GitHub and openstack.org, but they are mostly Nova, Glance, and Neutron logs; for the other components no log files can be found. I need logs for Cinder, Swift, Heat, and Ceilometer. Can someone help me with this?
Thanks in advance. :)

You can get logs from OpenStack Zuul, the CI infrastructure for OpenStack that tests all new code. The logs won't be exactly the same as regular production logs, since the jobs may run experimental code, but they should give you an idea at least.
http://zuul.openstack.org/
If you click on one of the jobs to expand the list and then pick one of the jobs, you should be able to get logs from all components used in that test.
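Once you have downloaded a log file, filtering for the error- and critical-level entries you are after can be done with a quick grep. A minimal sketch; the file name and log lines below are made up for illustration, loosely following the timestamp/pid/level layout OpenStack services typically use:

```shell
# Create a tiny sample log file (hypothetical content, for illustration only)
cat > sample-cinder.log <<'EOF'
2020-11-16 10:01:02.123 12345 INFO cinder.volume.manager [-] Starting volume driver
2020-11-16 10:01:03.456 12345 ERROR cinder.volume.manager [-] Failed to initialize driver
2020-11-16 10:01:04.789 12345 CRITICAL cinder [-] Unhandled error
EOF

# Keep only the ERROR and CRITICAL entries
grep -E ' (ERROR|CRITICAL) ' sample-cinder.log
```

The same filter works across a whole directory of downloaded component logs with `grep -rE`.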

Related

CloudFormation: stack is stuck, CloudTrail shows repeating DeleteNetworkInterface events

I am deploying a stack with CDK. It gets stuck in CREATE_IN_PROGRESS, and CloudTrail shows these events repeating in the logs:
DeleteNetworkInterface
CreateLogStream
What should I look at next to continue debugging? Is there a known reason for this to happen?
I saw the exact same issue with a CDK-based ECS/Fargate deployment.
In my case, I was able to diagnose the issue by following the AWS support article https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-stack-stuck-progress/
What specifically diagnosed and then resolved it for me:
I updated my ECS service to set its desired task count to 0. At that point the CloudFormation stack completed successfully.
From that, it became obvious that the actual issue was related to the creation of the initial task for my ECS service. I was able to diagnose it by reviewing the output in the Deployment and Events tabs of the ECS service in the AWS Management Console. In my case, task creation was failing because of an issue with accessing the associated ECR repository. There could of course be other reasons, but they should show up there.
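The "set the desired count to 0, then inspect the service events" steps above can also be done from the AWS CLI. A sketch; the cluster and service names are placeholders for your own:

```shell
# Scale the ECS service down to zero tasks so CloudFormation can finish
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --desired-count 0

# Then inspect the most recent service events for the underlying failure
aws ecs describe-services \
  --cluster my-cluster \
  --services my-service \
  --query 'services[0].events[:10]'
```

The events output is where failures like an unreachable ECR repository typically surface.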

Is it possible to have hot reload with Kubernetes?

I am trying to get to grips with Kubernetes, but I'm facing a problem with hot reload.
In development mode, while I am working on the code, I need the code to be synchronized with the pods directly, like in Docker when I use volumes to keep the state.
Is there any way to make this work with Kubernetes?
I would be thankful for any help with Kubernetes...
From the point of view of cloud native (or Kubernetes), the infrastructure is immutable and Pods are the smallest deployable units, so you should replace a pod rather than change it (your code is part of the pod/image). The correct process is: change code -> build image -> recreate pod in your env. Your current process may still work, it just doesn't follow cloud-native best practice. –
vincent pli
Also, you can try Ksync, which lets you synchronize application code between your local machine and your Kubernetes cluster. Please refer to the official documentation to read more about it.
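The Ksync workflow mentioned above looks roughly like this; the label selector `app=my-app` and the container path `/app` are placeholders for your own values, so check the flags against the Ksync docs for the version you install:

```shell
# One-time setup: install the cluster-side component into your cluster
ksync init

# Map the current local directory onto a path inside pods matching a selector
ksync create --selector=app=my-app $(pwd) /app

# Start the background watcher that keeps both sides in sync
ksync watch
```

With the watcher running, edits to local files appear inside the matching pods without rebuilding the image, which is the hot-reload behaviour you described.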

Master Kubernetes nodes offline in GKE (multiple clusters and projects)

This morning we noticed that all Kubernetes clusters in all projects (2 projects, 2 clusters per project) showed unavailable / ERROR in the Google Cloud Console.
The dashboard shows no current issues: https://status.cloud.google.com/
It basically looks like the master nodes are down: the API does not respond and the clusters cannot be edited in the UI. Before the weekend everything was up, and since at least yesterday evening they have all shown red.
The deployed services fortunately respond, but we cannot manage the cluster in any way.
I reported it here too:
https://issuetracker.google.com/issues/172841082
Did anyone else encounter this and is there any way to restart or trigger the master node to restart? I cannot edit the cluster so an upgrade is not possible either.
I read elsewhere that only SRE folks from Google can (re)start them.
It's beyond me how this can happen.
By the way, auto-repair is set to on and I followed the troubleshooting page, basically with all paths leading to : master node down, nothing to be done.
Any help would be greatly appreciated, or simply an SRE doing a start-node action ;).
Thank you @dany L, it was indeed a billing issue.
I'm surprised there is nothing like a message in the Cloud Console; one has to go to billing specifically to find out about this.
After billing was fixed, it took a few minutes before the clusters were available, and then everything was back to normal.
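For anyone hitting the same symptom, it may be worth checking the project's billing state before digging into the cluster itself. A sketch using the gcloud CLI; PROJECT_ID, CLUSTER_NAME, and ZONE are placeholders:

```shell
# Check whether billing is enabled for the project
gcloud beta billing projects describe PROJECT_ID

# Check the cluster status as reported by GKE
gcloud container clusters describe CLUSTER_NAME \
  --zone ZONE \
  --format='value(status)'
```

A disabled billing account with clusters stuck in an error status is consistent with the outcome described above.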

GCP Stackdriver logging: log format in bucket changed from folder per container to stdout/stderr

I have a question similar to the one described here: GKE kubernetes container stdout logs format changed
In the old version of Stackdriver I had one sink with a filter like this:
resource.type=container,
resource.namespace_id=[NAMESPACE_NAME]
resource.pod_id=[POD_NAME]
and the logs were stored in the bucket nicely, like this:
logName=projects/[PROJECT-NAME]/logs/[CONTAINER-NAME]
...so I had a folder with logs for each container.
But now I have updated Stackdriver logging+monitoring to the latest version, and I have two folders, stdout and stderr, which contain all the logs for all containers!
logName=projects/[PROJECT-NAME]/logs/stdout
logName=projects/[PROJECT-NAME]/logs/stderr
All the logs from many containers are stored in these two folders! This is pretty inconvenient =(
I've read about this in the docs: https://cloud.google.com/monitoring/kubernetes-engine/migration#changes_in_log_entry_contents
The logName field might change. Stackdriver Kubernetes Engine Monitoring log entries use stdout or stderr in their log names whereas Legacy Stackdriver used a wider variety of names, including the container name. The container name is still available as a resource label.
...but I can't find a solution! Please help me: how can I get folder-per-container logging, like it was in the old version of Stackdriver?
Here is a workaround that has been suggested:
1. Create a different sink for each of your containers, filtered by resource.labels.container_name
2. Export each sink to a different bucket
Note: if you configure each separate sink to the same bucket, the logs will be combined.
More details at Google Issue Tracker
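A sketch of the per-container sink workaround above using the gcloud CLI; the sink name, bucket name, and container name are placeholders, and you would repeat this once per container:

```shell
# One sink per container, each exporting to its own Cloud Storage bucket
gcloud logging sinks create my-container-sink \
  storage.googleapis.com/my-container-logs-bucket \
  --log-filter='resource.type="k8s_container" AND resource.labels.container_name="my-container"'
```

After creating a sink, remember to grant its service account write access to the destination bucket, or exports will silently fail.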

Logging Kubernetes with an external ELK stack

Is there any documentation out there on sending logs from containers in K8s to an external ELK cluster running on EC2 instances?
We're in the process of setting up Kubernetes, and I'm trying to figure out how to get logging to work correctly. We already have an ELK stack set up on EC2 for current versions of the application, but most of the documentation out there seems to refer to ELK as deployed into the K8s cluster.
I am also working on the same cause.
First you should know which driver your Docker containers use to manage logs (json-file driver / journald etc. - read here).
After that you should use a log collector in your architecture to send the logs to the Logstash endpoint. You can use Filebeat or Fluent Bit; they are lightweight alternatives to Logstash and Fluentd respectively. You should use one of them rather than sending your logs directly to Logstash via syslog, since these log shippers have the special ability to enrich your logs with the Kubernetes metadata of the respective containers.
There may be a lot of challenges after that, such as parsing log data (multiline logs, for example). For an efficient pipeline, it's better to do most of the work (e.g. extracting the date object from the logs) on the log-sender side than in the shared Logstash instance, which might become a bottleneck.
Note that if the container logs are not sent to stdout/stderr but written elsewhere, you might need to run Filebeat/Fluent Bit as a sidecar with your containers.
As far as documentation links are concerned, I didn't find anything documented in a single place on this, but reading up on the keywords I mentioned above taught me a lot.
Hope this helps.
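As a concrete starting point for the Filebeat route described above, a minimal filebeat.yml fragment for a Filebeat DaemonSet might look like this. The Logstash host is a placeholder, and the exact options vary by Filebeat version, so check the docs for the release you run:

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      # Enrich each event with pod, namespace, and label metadata
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

# Ship events to the external Logstash running on EC2 (placeholder host)
output.logstash:
  hosts: ["logstash.example.internal:5044"]
```

The add_kubernetes_metadata processor is what gives you the container-level enrichment mentioned above, which plain syslog forwarding would lose.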