How do I run, for development purposes, a cloud-init YAML file that would normally be run via user-data?
I know how to re-run cloud-init, but I want to develop a complicated cloud-init file, and continually building new instances to test it is rather tedious.
Sorry to say, you're going to have to run it on a new, clean instance (or at least a snapshot of one). Even if you manually went back and restarted at different steps, there could be side effects.
I think you'll find that if you get used to managing local VMs, you can debug your scripts fairly quickly.
The quickest path for iterating on user-data input to cloud-init is probably via LXD. You can quickly set up LXD on a VM host or a bare-metal system. Once it is set up, launches are very quick.
$ cat ud.yaml
#cloud-config
runcmd:
- "read up idle < /proc/uptime; echo Up $up seconds | tee /run/runcmd.log"
$ lxc launch ubuntu-daily:bionic ud-test "--config=user.user-data=$(cat ud.yaml)"
Creating ud-test
Starting ud-test
$ lxc exec ud-test cat /run/runcmd.log
Up 8.05 seconds
$ lxc stop ud-test
$ lxc delete ud-test
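To iterate quickly, you can tear down and relaunch in one line (a sketch reusing the commands above; lxc delete -f force-removes the running container):
$ lxc delete -f ud-test 2>/dev/null; lxc launch ubuntu-daily:bionic ud-test "--config=user.user-data=$(cat ud.yaml)"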
You might be able to get away with just running cloud-init clean and then re-running it.
I'm experimenting with cloud-init, using an Ubuntu box with KVM as a virtualization lab. I made a simple Makefile to build the cloud-init image and launch it in a KVM instance.
You can see my code here:
https://github.com/brennancheung/playbooks/blob/master/cloud-init-lab/Makefile
all: clean build run

INSTANCE_NAME := "vm"
CLOUD_IMAGE_FILE = "bionic-server-cloudimg-amd64.img"
CLOUD_IMAGE_BASE_URL := "http://cloud-images.ubuntu.com/bionic/current"
CLOUD_IMAGE_URL := "$(CLOUD_IMAGE_BASE_URL)/$(CLOUD_IMAGE_FILE)"

download:
	wget $(CLOUD_IMAGE_URL)

clean:
	@echo "Removing build artifacts"
	-@rm -f config.img 2>/dev/null
	-@virsh destroy $(INSTANCE_NAME) 2>/dev/null || true
	-@virsh undefine $(INSTANCE_NAME) 2>/dev/null || true
	-@rm -f $(INSTANCE_NAME).img

build:
	@echo "Building cloud config drive"
	cloud-localds config.img config.yaml
	cp $(CLOUD_IMAGE_FILE) $(INSTANCE_NAME).img

run:
	@echo "Spawning instance $(INSTANCE_NAME)"
	virt-install \
		--name $(INSTANCE_NAME) \
		--memory 8192 \
		--disk ./$(INSTANCE_NAME).img,device=disk,bus=virtio \
		--disk ./config.img,device=cdrom \
		--os-type linux \
		--os-variant ubuntu18.04 \
		--virt-type kvm \
		--graphics none \
		--network bridge=br0
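The build target expects a config.yaml next to the Makefile containing the user-data under test; a minimal sketch (the contents are just an example) might be:
#cloud-config
hostname: vm
runcmd:
  - echo "cloud-init ran at $(date)" > /run/build-test.log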
I am not sure why this answer is not here already; maybe it does not apply to earlier versions.
All I do to re-run cloud-init for dev testing (especially when testing user-data changes) is:
1 - Change the config file(s), usually only /etc/cloud/cloud.cfg
2 - Run clean (the -l flag also removes the logs):
cloud-init clean -l
3 - Re-run cloud-init:
cloud-init init
Of course, this has its limitations: depending on the settings you test, cloud-init clean is not going to revert the previous changes, but you may be able to figure out ways around that. For example, I am testing the creation of new users, so every time I change something in a user's settings and want to test it, I create a new user.
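For instance, a users: stanza like the one below can be re-tested after each clean by bumping the user name (a sketch; the name and groups are placeholders):
#cloud-config
users:
  - name: testuser2   # bump the number each iteration to get a fresh user
    groups: sudo
    shell: /bin/bash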
All this is only a quick in-development test; if you need to truly verify your changes, you need a new instance.
Re-running all of cloud-init without a system reboot isn't a recommended approach, because some parts of cloud-init run in the systemd generator timeframe to detect new datasource types. That said, the following commands will let you accomplish this on a running system without a reboot.
cloud-init supports a clean subcommand that removes all semaphore files and allows cloud-init to re-run all config modules again. Beware that this means SSH host keys are regenerated and .ssh config files are re-written, so it could impact your ability to get back into the VM.
To clean all semaphores so cloud-init modules will all re-run on next boot:
sudo cloud-init clean --logs
cloud-init typically runs its multiple boot stages in sequence, driven by systemd service dependencies. If you want to repeat that process without a reboot, you can run the following four commands:
Detect local datasource (cloud platform) and obtain user-data:
sudo cloud-init init --local
Detect any datasources and user-data which require network up and run cloud_init_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init init
Run all cloud_config_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=config
Run all cloud_final_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=final
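If you repeat this often, the clean plus the four stages can be wrapped in a small helper script (a sketch composed of exactly the commands above; run it as root):
#!/bin/bash
# Sketch: wipe cloud-init state and re-run every boot stage without rebooting.
set -e
cloud-init clean --logs            # remove semaphores and old logs
cloud-init init --local            # detect local datasource, obtain user-data
cloud-init init                    # network datasources + cloud_init_modules
cloud-init modules --mode=config   # run cloud_config_modules
cloud-init modules --mode=final    # run cloud_final_modules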
Related
I'd like to create a GitHub action that sets up an environment in Windows, running a few PowerShell commands. Although this can be done easily as a step, there does not seem to be a way to create a complete GitHub action for that. If I use this:
name: 'Rakudo Star fix for windows'
description: 'Updates zef for RakudoStar'
author: 'JJ'
runs:
using: 'node12'
main: 'upgrade.ps1'
There does not seem to be a way to run anything other than a JS script, or even to declare the environment. I understand that's left for later, during the job steps, but it still looks like a hack. Is there anything I'm missing here?
You could also run Docker directly, with an entrypoint for the .ps1 script:
FROM ubuntu:18.04
LABEL "com.github.actions.name"="test"
LABEL "com.github.actions.description"="test."
RUN apt-get update \
&& apt-get install wget -y \
&& wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb \
&& dpkg -i packages-microsoft-prod.deb \
&& apt-get update \
&& apt-get install -y powershell
ADD test.ps1 /test.ps1
ENTRYPOINT ["pwsh", "/test.ps1"]
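The action metadata would then point at that Dockerfile instead of a JS entrypoint, since using: 'docker' with image: 'Dockerfile' is how container-based actions are declared (a sketch based on the question's metadata):
name: 'Rakudo Star fix for windows'
description: 'Updates zef for RakudoStar'
author: 'JJ'
runs:
  using: 'docker'
  image: 'Dockerfile'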
Update:
The using field is the application used to execute the code specified in main, but GitHub Actions only supports node12 and docker for using, as seen from this GitHub Actions run I did for example's sake.
Docker won't run in most Windows environments, and you'd have to use Windows Server 2019 as your base environment.
I want to run a Bash script prior to either shutdown or reboot of my Pi (running the latest Raspbian, a derivative of Debian).
e.g. if I type sudo shutdown now or sudo reboot now at the command prompt, it should run my Bash script before continuing with the shutdown/reboot.
I created a very simple script just for testing, to ensure I get the method working before I bother writing the actual script:
#!/bin/bash
touch /home/pi/ShutdownFileTest.txt
I then copied the file (called CreateFile.sh) to /etc/init.d/CreateFile
I then created symlinks in /etc/rc0.d/ and /etc/rc6.d/:
sudo ln -s /etc/init.d/CreateFile K99Dave
I'm not certain what the proper naming for the symlink should be. Some websites say "start the filename with a K", some say "start with an S", and one said "start with K99 so it runs at the right time".
I actually ended up trying all of the following (not all at once, of course, but one at a time):
sudo ln -s /etc/init.d/CreateFile S00Dave
sudo ln -s /etc/init.d/CreateFile S99Dave
sudo ln -s /etc/init.d/CreateFile K00Dave
sudo ln -s /etc/init.d/CreateFile K01rpa
sudo ln -s /etc/init.d/CreateFile K99Dave
After creating each symlink, I always ran:
sudo chmod a+x /etc/init.d/CreateFile && sudo chmod a+x /etc/rc6.d/<name of symlink>
I then rebooted each time.
Each time, the file at /home/pi/ShutdownFileTest.txt was not created; the script was not executed.
I found this comment on an older post, suggesting that the above was the outdated method:
The modern way to do this is via systemd. See "man systemd-shutdown"
for details. Basically, put an executable shell script in
/lib/systemd/system-shutdown/. It gets passed an argument like "halt"
or "reboot" that allows you to distinguish the various cases if you
need to.
I copied my script into /lib/systemd/system-shutdown/, chmod +x'd it, and rebooted, but still had no success.
I note the above comment says the script is passed "halt" or "reboot" as an argument. As my script should run identically in both cases, I assume it shouldn't need to deal with that argument. I don't know how to handle such an argument anyway, so I'm not sure whether I need to do anything to make that work.
Could someone please tell me where I'm going wrong?
Thanks in advance,
Dave
As it turns out, part of the shutdown sequence has already executed (and unmounted the filesystem) before these scripts are run.
Therefore, remounting the filesystem read-write at the start of the script and remounting it read-only at the end is necessary.
Simply add:
mount -oremount,rw /
...at the start of the script (beneath the #!/bin/bash)
...then have the script's code...
and then finish the script with:
mount -oremount,ro /
So the OP's script should become:
#!/bin/bash
mount -oremount,rw /
touch /home/pi/ShutdownFileTest.txt
mount -oremount,ro /
...that then creates the file /home/pi/ShutdownFileTest.txt just before shutdown/reboot.
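If you ever do need the argument mentioned in man systemd-shutdown, it arrives as $1 with a value such as "halt", "poweroff", "reboot" or "kexec", so the script can branch on it (a sketch):
#!/bin/bash
mount -o remount,rw /
case "$1" in
    reboot) echo "rebooting"     > /home/pi/ShutdownFileTest.txt ;;
    *)      echo "shutting down" > /home/pi/ShutdownFileTest.txt ;;
esac
mount -o remount,ro /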
That said, it may not be best practice to use this method. Instead, it is better to create a service that runs whenever the computer is on and running normally, but runs the desired script when the service is terminated (which happens at shutdown/reboot).
This is explained in detail here, but essentially:
1: Create a file (let's call it example.service).
2: Add the following into example.service:
[Unit]
Description=This service calls shutdownScript.sh upon shutdown or reboot.
[Service]
Type=oneshot
RemainAfterExit=true
ExecStop=/home/pi/shutdownScript.sh
[Install]
WantedBy=multi-user.target
3: Move it into the correct directory for systemd by running sudo mv /home/pi/example.service /etc/systemd/system/example.service
4: Ensure the script to launch upon shutdown has appropriate permissions: chmod u+x /home/pi/shutdownScript.sh
5: Start the service: sudo systemctl start example
6: Make the service automatically start upon boot: sudo systemctl enable example
7: Stop the service: sudo systemctl stop example
This last command mimics what happens normally when the system shuts down, i.e. it runs /home/pi/shutdownScript.sh (without actually shutting down the system).
You can then reboot twice and it should work from the second reboot onwards.
EDIT: nope, it doesn't. It worked the first time I tested it, but stopped working after that. I'm not sure why. If I figure out how to get it working, I'll edit this answer and remove this message (or if someone else knows, please feel free to edit the answer for me).
As I do not have enough seniority to post comments, this is a new answer, for which I apologize.
I added a step to ZPMMaker's answer, and it seems to work, for me at least:
sudo chmod u+x /etc/systemd/system/example.service
I am working on a Play! application with Angular 2 and WebJars dependencies, using the SBT Play plugin and the TypeScript plugin. I use incremental compilation, but each recompilation takes a great amount of time. I set up sbt-optimizer to check which tasks take the longest, and I see that on each recompilation the WebJars account for almost all of the recompile time. I can't imagine why it needs to do anything with static files after the first compilation, yet even if I only change a Scala file or a Twirl template, it processes all the WebJars again regardless.
UPD:
If I run on a machine without Docker, the speed is normal: a recompile takes a few seconds.
Inside Docker it takes 200+ seconds.
Compilation messages: 2 s without Docker, 13 s inside Docker.
On-screen operations: 10-300 ms without Docker, 500-60000 ms inside Docker.
UPD:
Adding my Dockerfile:
FROM openjdk:8
ENV SCALA_VERSION=2.12.1
ENV SBT_VERSION=0.13.13
ENV NODEJS_VERSION=6.10.0
# Install sbt
RUN cd /tmp && \
wget https://dl.bintray.com/sbt/native-packages/sbt/$SBT_VERSION/sbt-$SBT_VERSION.zip && \
unzip sbt-$SBT_VERSION.zip -d /usr/local && \
rm sbt-$SBT_VERSION.zip
#install nodejs for web jars
RUN cd /tmp && \
wget https://nodejs.org/dist/v$NODEJS_VERSION/node-v$NODEJS_VERSION-linux-x64.tar.xz && \
tar -C /usr/local --strip-components 1 -xJf node-v$NODEJS_VERSION-linux-x64.tar.xz && \
rm node-v$NODEJS_VERSION-linux-x64.tar.xz
Here is the printout from the optimizer:
Maybe I missed something, or someone else has had this problem. Why does SBT do this each time, and how can I prevent it?
Thank you
If you are running a mounted volume on Docker for Mac, you're probably hitting this known issue with the performance of mounted volumes.
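A common mitigation is to relax the consistency guarantees of the bind mount with the :cached (or :delegated) flag; a sketch, where the image name is a placeholder:
docker run -it -v "$PWD":/app:cached my-sbt-image sbt compile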
I tried the following steps:
cd $GOPATH/src/github.com/grafana/grafana
go run build.go setup
I got the following output:
Version: 2.5.0-pre1, Linux Version: 2.5.0, Package Iteration: pre1
go get -v github.com/tools/godep
github.com/tools/godep (download)
github.com/tools/godep/Godeps/_workspace/src/github.com/kr/fs
github.com/tools/godep/Godeps/_workspace/src/github.com/pmezard/go-difflib/difflib
github.com/tools/godep/Godeps/_workspace/src/golang.org/x/tools/go/vcs
github.com/tools/godep
go get -v github.com/blang/semver
github.com/blang/semver (download)
github.com/blang/semver
go get -v github.com/mattn/go-sqlite3
go install -v github.com/mattn/go-sqlite3
Then I executed:
$GOPATH/bin/godep restore
I got no output, but the command executed.
Then I ran the command:
go run build.go build
Version: 2.5.0-pre1, Linux Version: 2.5.0, Package Iteration: pre1
rm -r bin
rm -r Godeps/_workspace/pkg
rm -r Godeps/_workspace/bin
rm -r dist
rm -r tmp
rm -r /Users/skhare/sk/go/pkg/darwin_amd64/github.com/grafana
rm -r ./bin/grafana-server
rm -r ./bin/grafana-server.md5
GOPATH=/Users/skhare/sk/go/src/github.com/grafana/grafana/Godeps/_workspace:/Users/skhare/sk/go
go build -ldflags -w -X main.version '2.5.0-pre1' -X main.commit 'v2.1.2+394-gfb767f5' -X main.buildstamp 1442671169 -o ./bin/grafana-server .
# github.com/grafana/grafana
link: warning: option -X main.version 2.5.0-pre1 may not work in future releases; use -X main.version=2.5.0-pre1
link: warning: option -X main.commit v2.1.2+394-gfb767f5 may not work in future releases; use -X main.commit=v2.1.2+394-gfb767f5
link: warning: option -X main.buildstamp 1442671169 may not work in future releases; use -X main.buildstamp=1442671169
Then I executed:
npm install
(I had to install npm first.)
>npm install -g grunt-cli
/usr/local/bin/grunt -> /usr/local/lib/node_modules/grunt-cli/bin/grunt
grunt-cli#0.1.13 /usr/local/lib/node_modules/grunt-cli
├── resolve#0.3.1
├── nopt#1.0.10 (abbrev#1.0.7)
└── findup-sync#0.1.3 (lodash#2.4.2, glob#3.2.11)
>grunt
Running "jscs:src" (jscs) task
>> 156 files without code style errors.
Running "jshint:source" (jshint) task
✔ No problems
Running "jshint:tests" (jshint) task
✔ No problems
Running "tslint:source" (tslint) task
>> 11 files lint free.
Running "clean:gen" (clean) task
Cleaning public_gen...OK
Running "copy:public_to_gen" (copy) task
Created 122 directories, copied 553 files
Running "less:src" (less) task
File public_gen/css/bootstrap.dark.min.css created.
File public_gen/css/bootstrap.light.min.css created.
File public_gen/css/bootstrap-responsive.min.css created.
Running "concat:cssDark" (concat) task
File public_gen/css/grafana.dark.min.css created.
Running "concat:cssLight" (concat) task
File public_gen/css/grafana.light.min.css created.
Running "typescript:build" (typescript) task
42 files created. js: 14 files, map: 14 files, declaration: 14 files (968ms)
Done, without errors.
>go get github.com/Unknwon/bra
The above command did not give any output, nor an error message.
bra run
It says: -bash: bra: command not found
I tried to find a resolution, but I could not. Please help.
Recompile backend on source change
To rebuild on source change (requires that you executed godep restore)
go get github.com/Unknwon/bra
bra run
Running Grafana Locally
You can run a local instance of Grafana by running:
./bin/grafana-server
You must have missed this step!
go get github.com/Unknwon/bra
You can install Grafana using Homebrew.
brew update
brew install grafana
This sounds like an issue where Go was installed just to build something else (for me, it was Grafana), in which case $GOPATH/bin is not in your PATH. Running $GOPATH/bin/bra directly should work; it did for me.
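In other words, either invoke it via its full path or add Go's bin directory to your PATH (a sketch for a bash shell):
export PATH="$PATH:$GOPATH/bin"
bra run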
I suggest installing Grafana inside Docker. If you install Docker for Mac, the GUI (Kitematic) lets you install Grafana with one click: create a new container with the "+ New" button, search for grafana in the existing image lists, and click "Create".
Docker will download Grafana, and it will appear in the left sidebar.
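If you prefer the command line over Kitematic, the equivalent is a single docker run against the official grafana/grafana image (3000 is Grafana's default port):
docker run -d -p 3000:3000 --name=grafana grafana/grafana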
I'm symlinking my config/unicorn_init.sh to /etc/init.d/unicorn_<project> with:
sudo ln -nfs config/unicorn_init.sh /etc/init.d/unicorn_<project>
Afterwards, when I run chkconfig --list, my unicorn_<project> script doesn't show up. I'm adding my unicorn script so that my application loads on server boot.
Obviously, this is not allowing me to add my script with:
chkconfig unicorn_<project> on
Any help / advice would be awesome :).
Edit:
Also, when I'm in /etc/init.d/ and run:
sudo service unicorn_<project> start
It says: "unrecognized service"
I figured this out. There were two things wrong with what I was doing:
1) You have to make sure your unicorn script plays nice with chkconfig by adding the code below just beneath #!/bin/bash. Props to DigitalOcean's blog for the help.
# chkconfig: 2345 95 20
# description: Controls Unicorn sinatra server
# processname: unicorn
2) I was attempting to symlink config/unicorn_init.sh with a relative path while already inside the project directory, which created a dangling symlink (it shows pink in ls output when it should be teal). To fix this, I removed the dangling symlink and provided the absolute path to unicorn_init.sh.
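Concretely, the fix looks like this (assuming the app lives under /var/www/<project>; substitute your actual path):
sudo rm /etc/init.d/unicorn_<project>
sudo ln -nfs /var/www/<project>/config/unicorn_init.sh /etc/init.d/unicorn_<project>
sudo chkconfig unicorn_<project> on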
To debug this, I used ll in the /etc/init.d/ directory to check r,w,x permissions and file types, ran chkconfig --list to see the list of services in /etc/init.d/, and tried running the dangling symlink in /etc/init.d with sudo service unicorn_<project> restart.
Hope this helps someone.