Are requisites required, or is order sufficient?

The Salt docs are full of this kind of pattern:
apache:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: apache
This repetition ("install apache, now check whether apache was installed") seems to be a violation of don't-repeat-yourself (DRY). So is it necessary?
From "Understanding State Ordering":
To accomplish something similar to how classical imperative systems function, all requisites can be omitted and the failhard option then set to True in the master configuration; this will stop all state runs at the first instance of a failure.
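Presumably that means something like the following in the master configuration (my sketch, not taken from the docs):
# /etc/salt/master
failhard: True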
This seems to imply that the use of requisites everywhere is actually optional (assuming that the declaration order is correct) - but I'd like to know for sure.

It is a remnant of the pre-0.15 days, when states weren't executed top-down.
Ordering is now sufficient.

States are now executed in the order they are declared in your sls files. You will still want to use "require" when you need to ensure that one state executes successfully before another runs.
For example, you may want to ensure a software package is installed correctly before attempting to lay down a config file.
apache:
  pkg:
    - installed
  file:
    - managed
    - name: /etc/apache/httpd.conf
    - source: salt://apache/httpd.conf
    - require:
      - pkg: apache
Without the "require" in the above example, the config file would be laid down even if the apache pkg failed to install.

Go Stackdriver debugger error loading program

I am trying to set up Stackdriver debugging using Go. Using the article and this great Medium post, I came up with this solution.
The key parts in cloudbuild.yaml are:
- name: gcr.io/cloud-builders/wget
  args: [
    "-O",
    "go-cloud-debug",
    "https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug"
  ]
...
In the Dockerfile I have:
...
COPY gopath/bin/stackdriver-demo /stackdriver-demo
ADD go-cloud-debug /
ADD source-context.json /
CMD ["/go-cloud-debug","-sourcecontext=./source-context.json", "-appmodule=go-errrep","-appversion=1.0","--","/stackdriver-demo"]
...
However, the pods keep crashing; the container logs show this error:
Error loading program: decoding dwarf section info at offset 0x0: too short
EDIT: Using https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug may be outdated, as I haven't seen it used outside Daz's Medium post. The official docs use the package cloud.google.com/go/cmd/go-cloud-debug-agent.
I have updated the cloudbuild.yaml file to install this package:
- name: 'gcr.io/cloud-builders/go'
  args: ["get", "-u", "cloud.google.com/go/cmd/go-cloud-debug-agent"]
  env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux']
- name: 'gcr.io/cloud-builders/go'
  args: ["install", "cloud.google.com/go/cmd/go-cloud-debug-agent"]
  env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux']
In the Dockerfile I can then access the binary at gopath/bin/go-cloud-debug-agent.
When I execute go-cloud-debug-agent with my own program as an argument:
/go-cloud-debug-agent -sourcecontext=./source-context.json -appmodule=go-errrep -appversion=1.0 -- /stackdriver-demo
I get another opaque error:
Error loading program: AttrStmtList not present or not int64 for unit 88
So neither the cloud-debug binary from https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug nor the cloud-debug-agent binary from the package cloud.google.com/go/cmd/go-cloud-debug-agent works, and they give different errors.
Would appreciate any tips on what I'm doing wrong and how to fix it.
OK :-)
Yes, you should follow the current Stackdriver documentation, e.g. go-cloud-debug-agent
Unfortunately, there are now various issues with my post including a (currently broken) gcr.io/cloud-builders/kubectl for regions.
I think your issue pertains to your use of golang:alpine. Alpine uses musl rather than the glibc that you find on most other Linux distros, and so you really must compile for Alpine to ensure your binaries reference the correct libc.
I'm able to get your solution working primarily by switching your Dockerfile to pull the Cloud Debug Agent while on Alpine and to compile your source on Alpine:
FROM golang:alpine
# git is needed for "go get"
RUN apk add git
RUN go get -u cloud.google.com/go/cmd/go-cloud-debug-agent
# WORKDIR is /go in the golang image, so this lands at /go/src/main.go
ADD main.go src
# build with optimizations and inlining disabled so the debugger can resolve symbols
RUN CGO_ENABLED=0 go build -gcflags=all='-N -l' src/main.go
ADD source-context.json /
CMD ["bin/go-cloud-debug-agent","-sourcecontext=/source-context.json", "-appmodule=stackdriver-demo","-appversion=1.0","--","main"]
I think that should get you beyond the errors that you documented and you should be able to deploy your container to Kubernetes.
I've made my version of your image publicly available (and will retain it for a few days for you):
gcr.io/dazwilkin-190402-55473323/roberson34#sha256:17cb45f1320e2fe04e0681310506f4c229896429192b0d1c2c8dc20ed54adb0d
You may wish to reference it (by that digest) in your deployment.yaml
NB For Error Reporting to be "interesting", your code needs to generate errors and, with your example, this is going to be challenging (usually a good thing). You may consider adding another errorful handler that always results in errors so that you may test the service.
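For example, a minimal sketch of such an always-failing handler (the route and messages are made up):
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Hypothetical handler that always fails, so Error Reporting has something to show.
	http.HandleFunc("/always-error", func(w http.ResponseWriter, r *http.Request) {
		err := fmt.Errorf("intentional failure for testing Error Reporting")
		log.Printf("handler error: %v", err)
		http.Error(w, err.Error(), http.StatusInternalServerError)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}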

Disable a standard systemd service in Yocto build

I need to start my own systemd service, let's call it custom.service. I know how to write a recipe for it to be added and enabled on boot:
SYSTEMD_SERVICE_${PN} = "custom.service"
SYSTEMD_AUTO_ENABLE_${PN} = "enable"
However, it conflicts with one of the default systemd services - systemd-timesyncd.service.
Is there a nice preferred way to disable that default systemd service in my bitbake file even though the systemd_XX.bb actually enables it?
I can create a systemd_%.bbappend file to modify the systemd settings, but I can't locate the place where one service can be disabled leaving all others enabled.
The working solution I found is to remove the timesyncd altogether using
PACKAGECONFIG_remove = "timesyncd"
But I wonder if this is an appropriate way, and if there is a way to just disable it but leave it in the system.
How about adding a .bbappend recipe for the conflicting service you want disabled? In it, you would add:
SYSTEMD_AUTO_ENABLE_${PN} = "disable"
If the system runs fine with the other package removed, then removing the package is a preferred solution. Fewer packages means a simpler system.
Usually you would set SYSTEMD_AUTO_ENABLE_${PN} = "disable", and that would let the service be part of the image but disabled on boot. However, for systemd, which provides a lot of default service units, this may not be a solution you want to deploy. You could surgically delete the symlink in /etc, which ensures that the service is not started automatically on boot while the .service file remains part of the image. So add the following to a systemd_%.bbappend file in your layer:
do_install_append() {
    rm -rf ${D}${sysconfdir}/systemd/system/sysinit.target.wants/systemd-timesyncd.service
}
There are other ways to disable this, e.g. using systemd presets, as described here.
Use systemd presets (see systemd.preset — Service enablement presets), and in particular the following steps.
Create a .bbappend file meta-xxx/recipes-core/systemd/systemd_%.bbappend with this content:
do_configure_append() {
    # disable autostart of systemd-timesyncd
    sed -i -e "s/enable systemd-timesyncd.service/disable systemd-timesyncd.service/g" ${S}/presets/90-systemd.preset
}
In my Yocto-based Linux distribution (the zeus release), the steps above are enough to disable the service, which remains installed.
In the output distribution, these steps modify the file /lib/systemd/system-preset/90-systemd.preset.
After the modification, that file contains the line disable systemd-timesyncd.service in place of the line enable systemd-timesyncd.service.
There is more information about the topic at systemd.preset — Service enablement presets.
I was not able to use SYSTEMD_AUTO_ENABLE_${PN} = "disable" in this context.
For other recipes (for example dnsmasq_2.82.bb) that assignment works correctly, and I have used it to enable (or disable) a service in the Yocto distribution.
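For such recipes, a minimal sketch of the bbappend (the layer path is only an example):
# meta-mylayer/recipes-support/dnsmasq/dnsmasq_%.bbappend
SYSTEMD_AUTO_ENABLE_${PN} = "disable"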
I think the "official" way to do this is to have something like this somewhere in your project:
PACKAGECONFIG_append_pn-systemd = "--disable-timesyncd"
This does basically the same thing you already suggested. To simply not enable the service, you have to do it manually, since you can modify the auto-enable only per recipe.

Why does Concourse `get` a resource after `put`ing it?

When I configure the following pipeline:
resources:
- name: my-image-src
  type: git
  source:
    uri: https://github.com/concourse/static-golang
- name: my-image
  type: docker-image
  source:
    repository: concourse/static-golang
    username: {{username}}
    password: {{password}}
jobs:
- name: "my-job"
  plan:
  - get: my-image-src
  - put: my-image
After building and pushing the image to the Docker registry, it subsequently fetches the image. This can take some time and ultimately doesn't really add anything to the build. Is there a way to disable it?
Every put implies a get of the version that was created. There are a few reasons for this:
The primary reason for this is so that the newly created resource can be used by later steps in the build plan. Without the get there is no way to introduce "new" resources during a build's execution, as they're all resolved to a particular version to fetch when the build starts.
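For example, here is a hedged sketch of a plan where a later step runs inside the image that was just pushed (the task name and command are made up):
jobs:
- name: "my-job"
  plan:
  - get: my-image-src
  - put: my-image        # builds and pushes; the implicit get fetches the new version
  - task: smoke-test     # hypothetical follow-up step
    image: my-image      # uses the artifact produced by the implicit get
    config:
      platform: linux
      run:
        path: sh
        args: ["-c", "echo image is usable"]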
There are some side-benefits to doing this as well. For one, it immediately warms the cache on one worker. So it's at least not totally worthless; later jobs won't have to fetch it. It also acts as validation that the put actually had the desired effect.
In this particular case, as it's the last step in the build plan, the primary reason doesn't really apply. But we didn't bother optimizing it away, since in most cases the side benefits make it worth avoiding the follow-up question ("why do only SOME put steps imply a get?").
It also cannot be disabled: we resist adding knobs that you'll want to turn one day and then have to remember to turn back off once you actually need the default behavior again.
Docs: https://concourse-ci.org/put-step.html

Configuring spring-xd to use Oracle as job repository

I want to run Spring XD with Oracle (11g), which I already have in my environment. Currently my first concern is the jobs UI (my database has existing data of job executions that were performed by spring-batch, and I simply want to display the details of those executions).
I'm using spring-xd-1.0.0.M5. I followed the instructions in the reference guide and changed application.yml to have the following:
spring:
  datasource:
    url: jdbc:oracle:oci:MY_USERNAME/MYPWD#//orarmydomain.com:1521/myservice
    username: MY_USERNAME
    password: MYPWD
    driverClassName: oracle.jdbc.OracleDriver
  profiles:
    active: default,oracle
I also modified batch-jdbc.properties to have a similar database configuration.
Yet when I start xd-singlenode.bat (or xd-admin.bat) it seems to ignore my Oracle configuration and still uses the default hsqldb.
What am I doing wrong?
Thanks
The likely reason is that we did not upgrade the Windows .bat scripts to take advantage of the property overriding via xd-config.yml. If you look at the Unix script for xd-singlenode you will see that when java is invoked there is an option:
-Dspring.config.location=$XD_CONFIG
For now you can hardcode the location of that file in the .bat script; use file: as the prefix.
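A hedged sketch of what the hardcoded option might look like on Windows (the path is only an example):
-Dspring.config.location=file:C:/spring-xd/xd/config/xd-config.yml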
Also, the UI right now is very primitive; you will not be able to see many details about the job execution. There are, however, many job-related commands you can execute in the shell, and there is only one gap regarding step execution information as compared to what is available via spring-batch-admin.
The issue to watch for this is https://jira.springsource.org/browse/XD-1209 and it is scheduled for the next milestone release.
Let me know how it goes, thanks!
Cheers,
Mark

What's the best Perl module for hierarchical and inheritable configuration?

If I have a greenfield project, what is the best practice Perl based configuration module to use?
There will be a Catalyst app and some command line scripts. They should share the same configuration.
Some features I think I want ...
Hierarchical configurations, to cleanly maintain different development and live settings.
I'd like to define "global" configuration once (e.g., results_per_page => 20) and have it inherited but overridable by my dev/live configs:
Global:
  results_per_page: 20
  db_dsn: DBI:mysql;
  db_name: my_app
Dev:
  inherit_from: Global
  db_user: dev
  db_pass: dev
Dev_New_Feature_Branch:
  inherit_from: Dev
  db_name: my_app_new_feature
Live:
  inherit_from: Global
  db_user: live
  db_pass: secure
When I deploy a project to a new server, or branch/fork/copy it somewhere new (eg, a new development instance), I want to (one time only) set which configuration set/file to use, and then all future updates are automatic.
I'd envisage this could be achieved with a symlink:
git clone example.com:/var/git/my_project . # or any equiv vcs
cd my_project/etc
ln -s live.config to_use.config
Then in the future
git pull # or any equiv vcs
I'd also like something akin to FindBin, so that my configs can use either absolute paths or paths relative to the current deployment. Given:
/home/me/development/project/
    bin
    lib
    etc/config
where /home/me/development/project/etc/config contains:
tmpl_dir: templates/
when my Perl code looks up the tmpl_dir configuration it'll get:
/home/me/development/project/templates/
But on the live deployment:
/var/www/project/
    bin
    lib
    etc/config
The same code would magically return
/var/www/project/templates/
Absolute values in the config should be honoured, so that:
apache_config: /etc/apache2/httpd.conf
would return "/etc/apache2/httpd.conf" in all cases.
Rather than a FindBin style approach, an alternative might be to allow configuration values to be defined in terms of other configuration values?
tmpl_dir: $base_dir/templates
I'd also like a pony ;)
Catalyst::Plugin::ConfigLoader supports multiple overriding config files. If your Catalyst app is called MyApp, then it has three levels of override:
1) MyApp.pm can have a __PACKAGE__->config(...) directive,
2) it next looks for MyApp.yml in the main directory of the app,
3) it then looks for MyApp_local.yml.
Each later level may override settings from the earlier ones.
In a Catalyst app I built, I put all of my immutable settings in MyApp.pm, my debug settings in MyApp.yml, and my production settings in MyApp_<servertype>.yml and then symlinked MyApp_local.yml to point at MyApp_<servertype>.yml on each deployed server (they were all a little different...).
That way, all of my config was in SVN and I just needed one ln -s step to manually config a server.
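As a rough sketch of level 1 (the settings shown are illustrative, not from the original app):
package MyApp;
use strict;
use warnings;
use Catalyst qw/ConfigLoader/;

# Level 1: immutable defaults baked into the application class
__PACKAGE__->config(
    name             => 'MyApp',
    results_per_page => 20,
);

# Levels 2 and 3 (MyApp.yml, then MyApp_local.yml) override these at load time
__PACKAGE__->setup;

1;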
Perl Best Practices warns against exactly what you want. It states that config files should be simple and avoid the sort of baroque features you desire. It goes on to recommend three modules (none of which are Core Perl): Config::General, Config::Std, and Config::Tiny.
The general rationale behind this is that the editing of config files tends to be done by non-programmers, and the more complicated you make your config files, the more likely they are to screw them up.
All of that said, you might take a look at YAML. It provides a full-featured, human-readable* serialization format. I believe the currently recommended parser in Perl is YAML::XS. If you do go this route, I would suggest writing a configuration tool for end users to use instead of having them edit the files directly.
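For instance, a minimal sketch of loading a YAML config with YAML::XS (the file name and key are examples):
use strict;
use warnings;
use YAML::XS qw(LoadFile);

# LoadFile parses the YAML document into a Perl data structure (a hashref here)
my $config = LoadFile('etc/my_app.yml');
print $config->{results_per_page}, "\n";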
ETA: Based on Chris Dolan's answer it sounds like YAML is the way to go for you since Catalyst is already using it (.yml is the de facto extension for YAML files).
* I have heard complaints that blind people may have difficulty with it
YAML is hateful for config: it's not non-programmer-friendly, partly because YAML in POD is by definition broken, as they're both whitespace-dependent in different ways. This addresses the main problem with Config::General. I've written some quite complicated config files with C::G in the past and it really keeps out of your way in terms of syntax requirements, etc. Other than that, Chris' advice seems on the money.
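For contrast, a minimal sketch of reading a Config::General file (the file name and key are made up):
use strict;
use warnings;
use Config::General;

# Parse an Apache-style config file into a plain Perl hash
my $conf   = Config::General->new('etc/my_app.conf');
my %config = $conf->getall;
print $config{results_per_page}, "\n";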