Spacewalk: some packages not being synced - CentOS

I am trying to troubleshoot why some packages are not being cloned.
As an example, take the latest tzdata package: currently it exists only in my official mirror channel:
[root@pro-adm9000 ~]# spacewalk-report channel-packages | grep tzdata-2015[fg] | grep centos
centos7-x86_64-updates,CentOS 7 Updates (x86_64),tzdata,2015f,1.el7,,noarch,tzdata-2015f-1.el7.noarch
centos7-x86_64-updates,CentOS 7 Updates (x86_64),tzdata,2015g,1.el7,,noarch,tzdata-2015g-1.el7.noarch
pit-centos7-x86_64-updates,Point In Time Centos 7 x86_64 Updates,tzdata,2015f,1.el7,,noarch,tzdata-2015f-1.el7.noarch
So version g exists in the official mirror channel, but not the point in time channel.
I tried doing a manual sync in case my conf file was incorrect:
[root@pro-adm9000 scripts]# spacewalk-clone-by-date --username admin --channels=centos7-x86_64-updates pit-centos7-x86_64-updates --to_date 2015-12-11 -a centos7-x86_64 pit-centos7-x86_64
I gave it 24 hours after verifying that Taskomatic is running, but still only 2015f is available in my cloned channel (pit-centos7-x86_64-updates).
I've also gone down the fsck route to ensure no data issues exist:
spacewalk-data-fsck -v -r -R
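To enumerate exactly which packages are missing from the clone, a quick diff of the same spacewalk-report output shown above can help (a sketch; it assumes the channel labels used in this question and that the channel names contain no commas, since column 8 of the CSV is the full package NEVRA):
spacewalk-report channel-packages | awk -F, '$1 == "centos7-x86_64-updates" {print $8}' | sort > /tmp/source-pkgs.txt
spacewalk-report channel-packages | awk -F, '$1 == "pit-centos7-x86_64-updates" {print $8}' | sort > /tmp/clone-pkgs.txt
comm -23 /tmp/source-pkgs.txt /tmp/clone-pkgs.txt   # packages in the source channel but missing from the clone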
The original snippet for cloning is as follows:
"centos7-x86_64-updates": {
"label": "pit-centos7-x86_64-updates",
"name": "Point In Time Centos 7 x86_64 Updates",
"summary": "Point In Time Updates Summary",
"description": "Point In Time Updates Desc",
"parent": "pit-centos7-x86_64"
},
With the following defaults:
{
"username":"admin",
"password":"XXXXXX",
"assumeyes":true,
"skip_depsolve":false,
"security_only":false,
"use_update_date":false,
"no_errata_sync":false,
"dry_run":false,
"blacklist": {"ALL":["cglib-*"]
},
I know the repodata is being rebuilt by taskomatic:
INFO | jvm 1 | 2015/12/09 10:25:52 | 2015-12-09 10:25:52,390 [Thread-53] INFO com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Repository metadata generation for 'pit-centos7-x86_64-updates' finished in 33 seconds

Liquibase via Docker - Changelog is not written to disk

I want to set up Liquibase (using Docker) for a PostgreSQL database running locally (not in a container). I followed multiple tutorials, including the one on Docker Hub.
As suggested, I've created a liquibase.docker.properties file in my <PATH TO CHANGELOG DIR>:
classpath: /liquibase/changelog
url: jdbc:postgresql://localhost:5432/mydb?currentSchema=public
changeLogFile: changelog.xml
username: myuser
password: mypass
to be able to run docker run --rm --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog liquibase/liquibase --defaultsFile=/liquibase/changelog/liquibase.docker.properties <COMMAND>.
When I run [...] generateChangeLog I get the following output (with option --logLevel info):
[2021-04-27 06:08:20] INFO [liquibase.integration] No Liquibase Pro license key supplied. Please set liquibaseProLicenseKey on command line or in liquibase.properties to use Liquibase Pro features.
Liquibase Community 4.3.3 by Datical
####################################################
## _ _ _ _ ##
## | | (_) (_) | ##
## | | _ __ _ _ _ _| |__ __ _ ___ ___ ##
## | | | |/ _` | | | | | '_ \ / _` / __|/ _ \ ##
## | |___| | (_| | |_| | | |_) | (_| \__ \ __/ ##
## \_____/_|\__, |\__,_|_|_.__/ \__,_|___/\___| ##
## | | ##
## |_| ##
## ##
## Get documentation at docs.liquibase.com ##
## Get certified courses at learn.liquibase.com ##
## Free schema change activity reports at ##
## https://hub.liquibase.com ##
## ##
####################################################
Starting Liquibase at 06:08:20 (version 4.3.3 #52 built at 2021-04-12 17:08+0000)
BEST PRACTICE: The changelog generated by diffChangeLog/generateChangeLog should be inspected for correctness and completeness before being deployed.
[2021-04-27 06:08:22] INFO [liquibase.diff] changeSets count: 1
[2021-04-27 06:08:22] INFO [liquibase.diff] changelog.xml does not exist, creating and adding 1 changesets.
Liquibase command 'generateChangeLog' was executed successfully.
It looks like the command ran "successfully", but I could not find the file changelog.xml in the local directory I mounted, i.e. <PATH TO CHANGELOG DIR>. The mount itself has to be working, since the container connects to the database successfully, i.e. it is able to access and read liquibase.docker.properties.
At first I thought I might have to tell Docker that it is allowed to write to my disk, but that should already be supported [from the description on Docker Hub]:
The /liquibase/changelog volume can also be used for commands that write output, such as generateChangeLog
What am I missing? Thanks in advance for any help!
Additional information
Output of docker inspect:
"Mounts": [
{
"Type": "bind",
"Source": "<PATH TO CHANGELOG DIR>",
"Destination": "/liquibase/changelog",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
...
],
When you run generateChangeLog, the path to the file should be specified as /liquibase/changelog/changelog.xml, even though for update it needs to be changelog.xml.
Example:
docker run --rm --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog liquibase/liquibase --defaultsFile=/liquibase/changelog/liquibase.docker.properties --changeLogFile=/liquibase/changelog/changelog.xml generateChangeLog
For generateChangeLog, the changeLogFile argument is the specific path to the file to output vs. a path relative to the classpath setting that update and other commands use.
When you include the command line argument as well as a defaultsFile like above, the command line argument wins. That lets you leverage the same default settings while replacing specific settings when specific commands need more/different ones.
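For instance, once the changelog has been generated, the same defaults file should be reusable unchanged for update, because update resolves changelog.xml against the classpath entry (a sketch, using the same placeholder path as the question):
docker run --rm --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog liquibase/liquibase --defaultsFile=/liquibase/changelog/liquibase.docker.properties update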
Details
There is a distinction between operations that are creating files and ones that are reading existing files.
With Liquibase, you almost always want to use paths to files that are relative to directories in the classpath, like the examples do. The specified changeLogFile gets stored in the tracking system, so if you ever run the same changelog but reference it in a different way (because you moved the root directory or are running from a different machine), then Liquibase will see it as a new file and attempt to re-run already-run changesets.
That is why the documentation has classpath: /liquibase/changelog and changeLogFile: com/example/changelog.xml. The update operation looks in the /liquibase/changelog dir to find a file called com/example/changelog.xml and finds it and stores the path as com/example/changelog.xml.
GenerateChangeLog is one of those "not always relative to classpath" cases because it needs to know where to store the file. If you just specify the output changeLogFile as changelog.xml, it creates that file relative to your process's working directory, which is not what you are expecting.
TL;DR
Prefix the changelog filename with /liquibase/changelog/ and pass it as a command line argument:
[...] --changeLogFile /liquibase/changelog/changelog.xml generateChangeLog
See Nathan's answer for details.
Explanation
I launched the container with -it and overrode the entrypoint to get an interactive shell within the container (see this post):
docker run --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog -it --entrypoint /bin/bash liquibase/liquibase -s
Executing ls yields the following:
liquibase@ubuntu-rafael:/liquibase$ ls
ABOUT.txt UNINSTALL.txt docker-entrypoint.sh liquibase
GETTING_STARTED.txt changelog examples liquibase.bat
LICENSE.txt changelog.txt lib liquibase.docker.properties
README.txt classpath licenses liquibase.jar
Notable here is docker-entrypoint.sh which actually executes the liquibase command, and the folder changelog which is mounted to my local <PATH TO CHANGELOG DIR> (my .properties file is in there).
Now I ran the same command as before but now inside the container:
sh docker-entrypoint.sh --defaultsFile=/liquibase/changelog/liquibase.docker.properties --logLevel info generateChangeLog
I got the same output as above, but guess what shows up when running ls again:
ABOUT.txt changelog examples liquibase.docker.properties
GETTING_STARTED.txt changelog.txt lib liquibase.jar
LICENSE.txt changelog.xml ...
The changelog actually exists! But it is created in the wrong directory...
If you prefix the changelog filename with /liquibase/changelog/, the container is able to write it to your local (mounted) disk.
P.S. This means that the description of the "Complete Example" using "a properties file" from here is not working. I will open an Issue for that.
UPDATE
Specifying the absolute path is only necessary for commands that write a new file, e.g. generateChangeLog (see Nathan's answer). But it is better practice to pass the absolute path via the command line so that you can keep the settings in the defaults file.

GitHub Actions: How to access the log of the current build via the terminal

I'm trying to get familiar with GitHub Actions. I have configured my workflow so that every time I push my code to GitHub, the code is automatically built and pushed to Heroku.
How can I access the build log information in terminal without going to github.com?
With the latest cli/cli tool named gh (1.9.0+), you can simply do
(from your terminal, without going to github.com):
gh run view <jobId> --log
# or
gh run view <jobId> --log-failed
See "Work with GitHub Actions in your terminal with GitHub CLI"
With the new gh run list, you receive an overview of all types of workflow runs whether they were triggered via a push, pull request, webhook, or manual event.
To drill down into the details of a single run, you can use gh run view, optionally going into as much detail as the individual steps of a job.
For more mysterious failures, you can combine a tool like grep with gh run view --log to search across a run’s entire log output.
If --log is too much information, gh run view --log-failed will output only the log lines for individual steps that failed.
This is great for getting right to the logs for a failed step instead of having to run grep yourself.
And with GitHub CLI 2.4.0 (Dec. 2021), gh run list comes with a --json flag for JSON export.
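For example, something along these lines should work (the exact --json field names depend on your gh version; running gh run list --json without fields should list the supported ones):
gh run list --limit 5 --json databaseId,headBranch,status,conclusion
gh run view <jobId> --log | grep -iE 'error|failed'   # search a run's full log without opening github.com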
Use
curl \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/repos/<github-user>/<repository>/actions/workflows/<workflow.yaml>/runs
https://docs.github.com/en/free-pro-team@latest/rest/reference/actions#list-workflow-runs
This will return a JSON with the following structure:
{
"total_count": 1,
"workflow_runs": [
{
"id": 30433642,
"node_id": "MDEyOldvcmtmbG93IFJ1bjI2OTI4OQ==",
"head_branch": "master",
"head_sha": "acb5820ced9479c074f688cc328bf03f341a511d",
"run_number": 562,
"event": "push",
"status": "queued",
"conclusion": null,
"workflow_id": 159038,
"url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642",
"html_url": "https://github.com/octo-org/octo-repo/actions/runs/30433642",
"pull_requests": [],
"created_at": "2020-01-22T19:33:08Z",
"updated_at": "2020-01-22T19:33:08Z",
"jobs_url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642/jobs",
"logs_url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642/logs",
"check_suite_url": "https://api.github.com/repos/octo-org/octo-repo/check-suites/414944374",
"artifacts_url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642/artifacts",
"cancel_url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642/cancel",
"rerun_url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642/rerun",
"workflow_url": "https://api.github.com/repos/octo-org/octo-repo/actions/workflows/159038",
"head_commit": {...},
"repository": {...},
"head_repository": {...}
}
]
}
Access the jobs_url with a PAT that has repository admin rights.
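For example, to pull the raw logs of a specific run, the logs_url shown above returns a redirect to a zip archive, so something like this should work (placeholders as in the question, plus <run-id> for the run you are interested in; the token needs sufficient repository access):
curl -L \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token $GITHUB_TOKEN" \
https://api.github.com/repos/<github-user>/<repository>/actions/runs/<run-id>/logs \
-o run-logs.zip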

Does osquery's inotify integration install watches on directories or on files?

I am using osquery to monitor files and folders to get events on any operation on those files. There is a specific syntax for osquery configuration:
"/etc/": watches the entire directory at a depth of 1.
"/etc/%": watches the entire directory at a depth of 1.
"/etc/%%": watches the entire tree recursively with /etc/ as the root.
I am trying to evaluate the memory usage when watching a lot of directories. In the process I found the following:
"/etc", "/etc/%", "/etc/%.conf": only 1 inotify watch is found registered by osquery.
"/etc/%%": 289 inotify watches are found registered by osquery, a few more than the 285 directories under the tree. When checking the entries in /proc/$PID/fdinfo, all the listed inodes point to directories only.
e.g. for "/etc/%.conf":
$ grep -r "^inotify" /proc/$PID/fdinfo/
18:inotify wd:1 ino:120001 sdev:800001 mask:3ce ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:01001200bc0f1cab
$ printf "%d\n" 0x120001
1179649
$ sudo debugfs -R "ncheck 1179649" /dev/sda1
debugfs 1.43.4 (31-Jan-2017)
Inode Pathname
1179649 //etc
The inotify watch is established on the whole directory here, but events are only reported for the matching files /etc/*.conf. Is osquery filtering the events based on the file_paths patterns supplied? That is what I am assuming, but I am not sure.
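A quick way to observe the same behaviour outside of osquery is to put a single watch on the directory and filter in user space, which is presumably what osquery does with its file_paths patterns (a sketch, assuming inotify-tools is installed):
inotifywait -m /etc --format '%w%f %e' | grep --line-buffered '\.conf'   # one watch on /etc; the *.conf restriction happens in user space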
Another experiment I performed to support the above claim was to use the example source from inotify(7) and run a watcher on a single file. When I check the list of inotify watches, it just shows:
$ ./a.out /tmp/inotify.cc &
$ cat /proc/$PID/fdinfo/3
...
inotify wd:1 ino:1a1 sdev:800001 mask:38 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:a1010000aae325d7
$ sudo debugfs -R "ncheck 417" /dev/sda1
debugfs 1.43.4 (31-Jan-2017)
Inode Pathname
417 /tmp/inotify.cc
So, according to this experiment, establishing a watcher on a single file is possible (which is clear from the inotify man page). This supports the claim that osquery is doing some sort of filtering based on the file patterns supplied.
Could someone verify the claim or present otherwise?
My osquery config:
{
"options": {
"host_identifier": "hostname",
"schedule_splay_percent": 10
},
"schedule": {
"file_events": {
"query": "SELECT * FROM file_events;",
"interval": 5
}
},
"file_paths": {
"sys": ["/etc/%.conf"]
}
}
$ osqueryd --version
osqueryd version 3.3.2
$ uname -a
Linux lab 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux
It sounds like some great sleuthing!
I think the comments in the source code support that; it's worth skimming them. I think these are the relevant files:
https://github.com/osquery/osquery/blob/master/osquery/tables/events/linux/file_events.cpp
https://github.com/osquery/osquery/blob/master/osquery/events/linux/inotify.cpp

jhipster kubectl - unable to decode " ": Object 'Kind' is missing

When running this command:
kubectl apply -f tenten
I get this error:
unable to decode "tenten\.angular-cli.json": Object 'Kind' is missing in '{
"project": {
"$schema": "./node_modules/#angular/cli/lib/config/schema.json",
"name": "tenten"
},
"apps": [{
"root": "src/main/webapp/",
"outDir": "target/www/app",
"assets": [
"content",
"favicon.ico"
],
"index": "index.html",
"main": "app/app.main.ts",
"polyfills": "app/polyfills.ts",
"test": "",
"tsconfig": "../../../tsconfig.json",
"prefix": "jhi",
"mobile": false,
"styles": [
"content/scss/vendor.scss",
"content/scss/global.scss"
],
"scripts": []
}],
It looks like you're running this from the parent directory of your applications. You should 1) create a directory that's parallel to your applications and 2) run yo jhipster:kubernetes in it. Then run kubectl apply -f tenten in that directory after you've built and pushed your docker images. For example, here's the output when I run it from the kubernetes directory in my jhipster-microservices-example project.
± yo jhipster:kubernetes
_-----_
| | ╭──────────────────────────────────────────╮
|--(o)--| │ Update available: 2.0.0 (current: 1.8.5) │
`---------´ │ Run npm install -g yo to update. │
( _´U`_ ) ╰──────────────────────────────────────────╯
/___A___\ /
| ~ |
__'.___.'__
´ ` |° ´ Y `
⎈ [BETA] Welcome to the JHipster Kubernetes Generator ⎈
Files will be generated in folder: /Users/mraible/dev/jhipster-microservices-example/kubernetes
WARNING! kubectl 1.2 or later is not installed on your computer.
Make sure you have Kubernetes installed. Read http://kubernetes.io/docs/getting-started-guides/binary_release/
Found .yo-rc.json config file...
? Which *type* of application would you like to deploy? Microservice application
? Enter the root directory where your gateway(s) and microservices are located ../
2 applications found at /Users/mraible/dev/jhipster-microservices-example/
? Which applications do you want to include in your configuration? (Press <space> to select, <a> to toggle all, <i> to inverse selection) blog, store
JHipster registry detected as the service discovery and configuration provider used by your apps
? Enter the admin password used to secure the JHipster Registry admin
? What should we use for the Kubernetes namespace? default
? What should we use for the base Docker repository name? mraible
? What command should we use for push Docker image to repository? docker push
Checking Docker images in applications' directories...
ls: no such file or directory: /Users/mraible/dev/jhipster-microservices-example/blog/target/docker/blog-*.war
identical blog/blog-deployment.yml
identical blog/blog-service.yml
identical blog/blog-postgresql.yml
identical blog/blog-elasticsearch.yml
identical store/store-deployment.yml
identical store/store-service.yml
identical store/store-mongodb.yml
conflict registry/jhipster-registry.yml
? Overwrite registry/jhipster-registry.yml? overwrite this and all others
force registry/jhipster-registry.yml
force registry/application-configmap.yml
WARNING! Kubernetes configuration generated with missing images!
To generate Docker image, please run:
./mvnw package -Pprod docker:build in /Users/mraible/dev/jhipster-microservices-example/blog
WARNING! You will need to push your image to a registry. If you have not done so, use the following commands to tag and push the images:
docker image tag blog mraible/blog
docker push mraible/blog
docker image tag store mraible/store
docker push mraible/store
You can deploy all your apps by running:
kubectl apply -f registry
kubectl apply -f blog
kubectl apply -f store
Use these commands to find your application's IP addresses:
kubectl get svc blog
See the end of my blog post Develop and Deploy Microservices with JHipster for more information.
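In short, for the tenten application from the question, the flow would look roughly like this (a sketch; the app name comes from the question, the commands from the output above, and <docker-repo> is a placeholder for your Docker repository name):
mkdir kubernetes && cd kubernetes
yo jhipster:kubernetes            # point it at ../ when asked for the apps' root directory
cd ../tenten && ./mvnw package -Pprod docker:build && cd ../kubernetes
docker image tag tenten <docker-repo>/tenten
docker push <docker-repo>/tenten
kubectl apply -f tenten           # plus any other generated directories, e.g. registry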

Where to find logs for a cloud-init user-data script?

I'm initializing spot instances running a derivative of the standard Ubuntu 13.04 AMI by pasting a shell script into the user-data field.
This works. The script runs. But it's difficult to debug because I can't figure out where the output of the script is being logged, if anywhere.
I've looked in /var/log/cloud-init.log, which seems to contain a bunch of stuff that would be relevant to debugging cloud-init, itself, but nothing about my script. I grepped in /var/log and found nothing.
Is there something special I have to do to turn logging on?
The default location for cloud-init user-data output is /var/log/cloud-init-output.log in AWS, DigitalOcean, and most other cloud providers. You don't need to set up any additional logging to see the output.
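So on a recent image, checking your script's output is usually just a matter of reading that file (a quick sketch):
sudo tail -n 50 /var/log/cloud-init-output.log   # user-data / script output captured by cloud-init
sudo less /var/log/cloud-init.log                # cloud-init's own log, useful if the script never ran at all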
You could create a cloud-config file (with "#cloud-config" at the top) for your userdata, use runcmd to call the script, and then enable output logging like this:
output: {all: '| tee -a /var/log/cloud-init-output.log'}
So I tried to replicate your problem. Usually I work with cloud-config, and therefore I just created a simple test user-data script like this:
#!/bin/sh
echo "Hello World. The time is now $(date -R)!" | tee /root/output.txt
echo "I am out of the output file...somewhere?"
yum search git # just for fun
ls
exit 0
Notice that, with CloudInit shell scripts, the user-data "will be executed at rc.local-like level during first boot. rc.local-like means 'very late in the boot sequence'"
After logging in to my instance (a Scientific Linux machine), I first went to /var/log/boot.log and there I found:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200! I am out of the file. Log file somewhere?
Loaded plugins: changelog, kernel-module, priorities, protectbase, security, tsflags, versionlock
126 packages excluded due to repository priority protections
9 packages excluded due to repository protections
epel/pkgtags                               | 581 kB 00:00
=============================== N/S Matched: git ===============================
GitPython.noarch : Python Git Library
cgit.x86_64 : A fast web interface for git
...
... (more yum search output)
...
bin etc lib lost+found mnt proc sbin srv tmp var
boot dev home lib64 media opt root selinux sys usr
(other unrelated stuff)
So, as you can see, my script ran and was rightly logged.
Also, as expected, I had my forced log 'output.txt' in /root/output.txt with the content:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200!
So... I am not really sure what is happening in your script.
Make sure you're exiting the script with
exit 0 #or some other code
If it still doesn't work, you should provide more info, like your script, your boot.log, your /etc/rc.local, and your cloudinit.log.
btw: what is your cloudinit version?