This is the content of /etc/cloud/cloud.cfg from the Ubuntu cloud 16.04 image:
# The top level settings are used as module
# and system configuration.
# A set of users which may be applied and/or used by various modules
# when a 'default' entry is found it will reference the 'default_user'
# from the distro configuration specified below
users:
  - default
# If this is set, 'root' will not be able to ssh in and they
# will get a message to login instead as the above $user (ubuntu)
disable_root: true
# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: false
# Example datasource config
# datasource:
#    Ec2:
#      metadata_urls: [ 'blah.com' ]
#      timeout: 5 # (defaults to 50 seconds)
#      max_wait: 10 # (defaults to 120 seconds)
# The modules that run in the 'init' stage
cloud_init_modules:
  - migrator
  - ubuntu-init-switch
  - seed_random
  - bootcmd
  - write-files
  - growpart
  - resizefs
  - disk_setup
  - mounts
  - set_hostname
  - update_hostname
  - update_etc_hosts
  - ca-certs
  - rsyslog
  - users-groups
  - ssh
# The modules that run in the 'config' stage
cloud_config_modules:
  # Emit the cloud config ready event
  # this can be used by upstart jobs for 'start on cloud-config'.
  - emit_upstart
  - snap_config
  - ssh-import-id
  - locale
  - set-passwords
  - grub-dpkg
  - apt-pipelining
  - apt-configure
  - ntp
  - timezone
  - disable-ec2-metadata
  - runcmd
  - byobu
# The modules that run in the 'final' stage
cloud_final_modules:
  - snappy
  - package-update-upgrade-install
  - fan
  - landscape
  - lxd
  - puppet
  - chef
  - salt-minion
  - mcollective
  - rightscale_userdata
  - scripts-vendor
  - scripts-per-once
  - scripts-per-boot
  - scripts-per-instance
  - scripts-user
  - ssh-authkey-fingerprints
  - keys-to-console
  - phone-home
  - final-message
  - power-state-change
# System and/or distro specific settings
# (not accessible to handlers/transforms)
system_info:
  # This will affect which distro class gets used
  distro: ubuntu
  # Default user name + that default users groups (if added/used)
  default_user:
    name: ubuntu
    lock_passwd: True
    gecos: Ubuntu
    groups: [adm, audio, cdrom, dialout, dip, floppy, lxd, netdev, plugdev, sudo, video]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  # Other config here will be given to the distro class and/or path classes
  paths:
    cloud_dir: /var/lib/cloud/
    templates_dir: /etc/cloud/templates/
    upstart_dir: /etc/init/
  package_mirrors:
    - arches: [i386, amd64]
      failsafe:
        primary: http://archive.ubuntu.com/ubuntu
        security: http://security.ubuntu.com/ubuntu
      search:
        primary:
          - http://%(ec2_region)s.ec2.archive.ubuntu.com/ubuntu/
          - http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/
          - http://%(region)s.clouds.archive.ubuntu.com/ubuntu/
        security: []
    - arches: [armhf, armel, default]
      failsafe:
        primary: http://ports.ubuntu.com/ubuntu-ports
        security: http://ports.ubuntu.com/ubuntu-ports
  ssh_svcname: ssh
As you can see, package-update-upgrade-install is placed in the final stage, whereas runcmd is placed in the config stage. According to the cloud-init documentation, modules in the config stage are executed before those in the final stage. As I understand it, runcmd should therefore be executed before the package install.
However, the following code runs without any error:
packages:
  - shorewall
runcmd:
  - echo "printing shorewall version"
  - shorewall version
That means runcmd was actually executed after the package install.
Is there any reason that makes cloud-init disregard the execution order defined in /etc/cloud/cloud.cfg?
While investigating how to get cloud-init to run things earlier in the boot process, I noticed this too. In my testing, runcmd did run in the config stage as you would expect, but all it did was generate a shell script from the runcmd data, which it wrote to /var/lib/cloud/instance/scripts/runcmd. Cloud-init then ran that shell script during the scripts-user module in the final stage. Below are excerpts from /var/log/cloud-init.log showing this:
"Mar 15 17:12:24 cloud-init[2796]: stages.py[DEBUG]: Running module runcmd (<module 'cloudinit.config.cc_runcmd' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_runcmd.pyc'>) with frequency once-per-instance",
"Mar 15 17:12:24 cloud-init[2796]: util.py[DEBUG]: Writing to /var/lib/cloud/instances/i-xxx/sem/config_runcmd - wb: [644] 20 bytes",
"Mar 15 17:12:24 cloud-init[2796]: helpers.py[DEBUG]: Running config-runcmd using lock (<FileLock using file '/var/lib/cloud/instances/i-xxx/sem/config_runcmd'>)",
"Mar 15 17:12:24 cloud-init[2796]: util.py[DEBUG]: Shellified 1 commands.",
"Mar 15 17:12:24 cloud-init[2796]: util.py[DEBUG]: Writing to /var/lib/cloud/instances/i-xxx/scripts/runcmd - wb: [700] 50 bytes",
...
"Mar 15 17:12:40 cloud-init[2945]: stages.py[DEBUG]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_scripts_user.pyc'>) with frequency once-per-instance",
"Mar 15 17:12:40 cloud-init[2945]: util.py[DEBUG]: Writing to /var/lib/cloud/instances/i-xxx/sem/config_scripts_user - wb: [644] 20 bytes",
"Mar 15 17:12:40 cloud-init[2945]: helpers.py[DEBUG]: Running config-scripts-user using lock (<FileLock using file '/var/lib/cloud/instances/i-xxx/sem/config_scripts_user'>)",
"Mar 15 17:12:40 cloud-init[2945]: util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/runcmd'] with allowed return codes [0] (shell=True, capture=False)",
Hope this helps...
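For anyone who needs a command to run before packages are installed, the module lists above suggest using bootcmd (which runs in the init stage) rather than runcmd. Below is a minimal sketch contrasting the two; the echo commands and log path are illustrative only:

#cloud-config
# bootcmd runs in the 'init' stage, before any packages are installed.
bootcmd:
  - echo "bootcmd runs first" >> /run/cloud-init-order.log
packages:
  - shorewall
# runcmd is only shellified into a script during the 'config' stage;
# the scripts-user module executes it in the 'final' stage, after
# package-update-upgrade-install has run.
runcmd:
  - echo "runcmd runs last" >> /run/cloud-init-order.log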
Related
I am new to SaltStack and I am trying to run a job using SaltStack Config.
I have a master and a minion (a Windows machine).
The init.sls file is the following:
{% set machineName = salt['pillar.get']('machineName', '') %}

C:\\Windows\\temp\\salt\\scripts:
  file.recurse:
    - user: Administrator
    - group: Domain Admins
    - file_mode: 755
    - source: salt://PROJECTNAME/DNS/scripts
    - makedirs: true

run-multiple-files:
  cmd.run:
    - cwd: C:\\Windows\\temp\\salt\\scripts
    - names:
      - ./dns.bat {{ machineName }}
and the dns.bat file:
Get-DnsServerResourceRecordA -ZoneName corp.local
The main idea is to create a DNS record, but for now I am trying to run only this command to check things out. However, I get the following info message when I run the job:
"The minion has not yet returned. Please try again later."
I went to the master, ran the command salt-run manage.status, and got the following:
salt-run manage.status
/usr/lib/python3.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.23) or chardet (4.0.0) doesn't match a supported version!
RequestsDependencyWarning)
down:
- machine1
- machine2
- machine3
up:
- saltmaster
I tried some commands to restart the machines, but still no success.
Any help would be appreciated! Thanks in advance!
I supply the cloud-init script below through the Azure portal when creating a VM, and the script never runs. I would appreciate it if anyone can suggest what's wrong with my #cloud-config upload.
Observations:
- ubuntuVMexscript.sh is written
- test.sh is NOT written to the home directory
- /etc/cloud/cloud.cfg doesn't show the [scripts-user, always] change in the final modules
#cloud-config
package_upgrade: true
write_files:
  - owner: afshan:afshan
    path: /var/lib/cloud/scripts/per-boot/ubuntuVMexscript.sh
    permissions: '0755'
    content: |
      #!/bin/sh
      cat > testCat < /var/lib/cloud/scripts/per-boot/ubuntuVMexscript.sh
  - owner: afshan:afshan
    path: /home/afshan/test.sh
    permissions: '0755'
    content: |
      #!/bin/sh
      echo "test"
cloud_final_modules:
  - rightscale_userdata
  - scripts-vendor
  - scripts-per-once
  - scripts-per-boot
  - scripts-per-instance
  - [scripts-user, always]
  - ssh-authkey-fingerprints
  - keys-to-console
  - phone-home
  - final-message
  - power-state-change
write_files runs before any user/group creation. Does the afshan user exist when write_files runs? If not, attempting to set the owner on the first file will throw an exception, and the write_files module will exit before attempting to create the second file. You can see if this is happening by checking /var/log/cloud-init.log on your instance.
/etc/cloud/cloud.cfg won't get updated by user data. It will stay as-is on disk, but your user data changes will get merged on top of it.
scripts-user refers to scripts written to /var/lib/cloud/instance/scripts. You haven't written anything there, so I'm not sure of the purpose of your [scripts-user, always] change. If you're just looking to run a script on every boot, the scripts-per-boot module (without any changes) should be fine: every boot, it runs whatever is written to /var/lib/cloud/scripts/per-boot.
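Putting that together, a minimal sketch of user data that sidesteps the ownership problem (assuming, as above, that the afshan user is created by cloud-init later in boot): write the file as root first, then fix ownership once users exist.

#cloud-config
write_files:
  # No owner specified: the file is written as root, so write_files
  # cannot fail on a user that doesn't exist yet.
  - path: /home/afshan/test.sh
    permissions: '0755'
    content: |
      #!/bin/sh
      echo "test"
runcmd:
  # runcmd executes in the final stage, after users and groups exist.
  - chown afshan:afshan /home/afshan/test.sh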
The MetaCPAN Travis CI coverage builds are quite slow. See https://travis-ci.org/metacpan/metacpan-web/builds/238884497 This is likely in part because we're not successfully ignoring the /local folder that gets created by Carton as part of our build. See https://coveralls.io/builds/11809290
We're using perl-helpers to help with our Travis configuration. I thought I should be able to use the DEVEL_COVER_OPTIONS environment variable in order to fix this, but I guess I don't have the correct incantation. I've included the entire config below because a few snippets out of context seemed misleading.
language: perl
perl:
  - "5.22"
matrix:
  fast_finish: true
  allow_failures:
    - env: COVERAGE=1 USE_CPANFILE_SNAPSHOT=true
    - env: USE_CPANFILE_SNAPSHOT=false HARNESS_VERBOSE=1
env:
  global:
    # Carton --deployment only works on the same version of perl
    # that the snapshot was built from.
    - DEPLOYMENT_PERL_VERSION=5.22
    - DEVEL_COVER_OPTIONS="-ignore ^local/"
  matrix:
    # Get one passing run with coverage and one passing run with Test::Vars
    # checks. If run together they more than double the build time.
    - COVERAGE=1 USE_CPANFILE_SNAPSHOT=true
    - USE_CPANFILE_SNAPSHOT=false HARNESS_VERBOSE=1
    - USE_CPANFILE_SNAPSHOT=true
before_install:
  - git clone git://github.com/travis-perl/helpers ~/travis-perl-helpers
  - source ~/travis-perl-helpers/init
  - npm install -g less js-beautify
  # Pre-install from backpan to avoid upgrade breakage.
  - cpanm -n http://cpan.metacpan.org/authors/id/M/ML/MLEHMANN/common-sense-3.6.tar.gz
  - cpanm -n App::cpm Carton
install:
  - cpan-install --coverage # installs coverage prereqs, if enabled
  - 'cpm install `test "${USE_CPANFILE_SNAPSHOT}" = "false" && echo " --resolver metadb" || echo " --resolver snapshot"`'
before_script:
  - coverage-setup
script:
  # Devel::Cover isn't in the cpanfile
  # but if it's installed into the global dirs this should work.
  - carton exec prove -lr -j$(test-jobs) t
after_success:
  - coverage-report
notifications:
  email:
    recipients:
      - olaf#seekrit.com
    on_success: change
    on_failure: always
  irc: "irc.perl.org#metacpan-travis"
# Use newer travis infrastructure.
sudo: false
cache:
  directories:
    - local
The syntax for the Devel::Cover options on the command line is a bit odd: the options need to be comma-separated, at least when you use PERL5OPT.
DEVEL_COVER_OPTIONS="-ignore,^local/"
See for example https://github.com/simbabque/AWS-S3/blob/master/.travis.yml#L26, where a whole series of options is passed comma-separated:
PERL5OPT=-MDevel::Cover=-ignore,"t/",+ignore,"prove",-coverage,statement,branch,condition,path,subroutine prove -lrs t
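Applied to the config in the question, that would mean switching the env entry to the comma-separated form. A sketch (assuming the travis-perl helpers pass DEVEL_COVER_OPTIONS through to Devel::Cover unchanged):

env:
  global:
    - DEPLOYMENT_PERL_VERSION=5.22
    # Comma-separated, as Devel::Cover expects on the command line:
    - DEVEL_COVER_OPTIONS="-ignore,^local/"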
Is it possible to do the same thing in SaltStack, but with embedded functionality (without the PowerShell workaround)?
installation:
  cmd.run:
    - name: ./installation_script

wait for installation:
  cmd.run:
    - name: powershell -command "Start-Sleep 10"
    - unless: powershell -command "Test-Path @('/path/to/file/to/appear')"
Unfortunately there is not a better way to do this in the current version of Salt. But retry logic was added to states in the next release, Nitrogen.
The way I would do this in that release is:
installation:
  cmd.run:
    - name: ./installation_script

wait for installation:
  cmd.run:
    - name: Test-Path @('/path/to/file/to/appear')
    - retry:
      - attempts: 15
      - interval: 10
      - until: True
    - shell: powershell

This will continue to run Test-Path until it exits with a 0 exit code (or whatever the equivalent is in PowerShell).
https://docs.saltstack.com/en/develop/ref/states/requisites.html#retrying-states
Daniel
NB: While using retry, pay attention to the indentation: it has to be 4 spaces from the retry key to form a dictionary for Salt. Otherwise it will default to 2 attempts with a 30s interval. (2017.7.0)
wait_for_file:
  file.exists:
    - name: /path/to/file
    - retry:
        attempts: 15
        interval: 30
How do I specify dependencies between init scripts on CentOS?
E.g. I need the "tomcat" service, when started, to first start the "soffice" service.
On Gentoo we can do:
depend() {
    need soffice
}
But what about CentOS?
CentOS out of the box uses integers to specify the start/stop order.
If you look inside an init script you'll most likely see: chkconfig: - 85 15
First number: start priority (a higher number means it starts later)
Second number: stop priority (a lower number means it stops earlier)
If you hop into /etc/rc3.d (or the directory for whichever run level), files start with either an S (start) or a K (kill, stop) followed by an integer. The same numeric ordering applies to those filenames.
In some cases you'll see: chkconfig: 2345 85 15
Here the first field simply represents the run levels (2, 3, 4, 5) in which the service starts.
To change the order, simply adjust those numbers.
There's a section in the init script:
### BEGIN INIT INFO
....
### END INIT INFO
Probably you'll need something like this:
### BEGIN INIT INFO
# Provides: tomcat
# Required-Start: $network soffice
# Required-Stop: $network
# Default-Start: 3 4 5
# Default-Stop: 0 1 6
# Short-Description: xxxx
# Description: xxxx
### END INIT INFO
More info:
https://wiki.debian.org/LSBInitScripts
After modifying this section you should disable and then re-enable the tomcat service:
chkconfig --del tomcat
chkconfig --add tomcat