How to start a GitHub Actions self-hosted runner on your own machine with Ubuntu and Docker? - github

I successfully downloaded and started a GitHub runner on my own Ubuntu machine. GitLab has a nice runner installer, but GitHub provides only a package of files with a run.sh script.
When I start run.sh, it works and the GitHub runner starts listening for actions.
But I couldn't find anywhere in the documentation how to correctly integrate the package with its sh file into Ubuntu so that it starts automatically after Ubuntu boots.
I also didn't find whether any steps are needed to secure the runner from the internet.
...and I also don't know where I can set it up to run parallel actions, or where I can configure resource limits, etc.
Thanks a lot for any help.
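For the auto-start part, the runner package ships with an svc.sh helper next to run.sh that installs the runner as a systemd service; a minimal sketch, assuming the runner was already configured with config.sh in ~/actions-runner (verify against svc.sh --help and the GitHub docs for your runner version):
cd ~/actions-runner
sudo ./svc.sh install   # creates and enables a systemd unit for this runner
sudo ./svc.sh start     # starts it now; systemd restarts it after a reboot
sudo ./svc.sh status    # confirm it is listening for jobs
On parallelism: a single runner processes one job at a time, so the usual approach is to configure several runner instances on the same machine, each in its own directory with its own service.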

Related

Actions module not found when deploying through fab.dev?

I am deploying my Next.js app through fab.dev to Cloudflare Workers, and after generating the fab.zip file I'm unable to deploy it.
Here are some steps you can try:
Reinstall the fab actions module:
npm install @fab/actions
Make sure you are running fab in a bash environment or any Linux environment; it doesn't work well with Windows PowerShell.

AWX 9.1.1: j2 render Playbook runs successfully, but won't write results out to directory; running from CLI works properly

I have a weird situation that I think is a bug in the latest release of AWX (v9.1.1). In fact, I have registered this issue as a possible bug with AWX development (Issue #5818). From the report:
SUMMARY
When executing a j2 template rendering Playbook role from AWX, the Playbook runs without error, but the rendered file is never written out to the directory. If you run the Playbook from the CLI, it runs without error and will write out the file correctly.
ENVIRONMENT
AWX version: 9.1.1
AWX install method: Docker Compose
Ansible version: 2.8.5 inside AWX; 2.9.4 in the CLI environment; I have downgraded the CLI to 2.8.5 with no change in behavior.
Operating System: Ubuntu 18.04.4 LTS
Web Browser: Chrome
STEPS TO REPRODUCE
Simply execute the Playbook role from the CLI - successful. Execute it within AWX - it reports success, but no rendered template file is written.
EXPECTED RESULTS
Expect the template to render and show Change=1 as the completed status. Running the job once more should result in Change=0 due to idempotency.
ACTUAL RESULTS
No matter how many times the Playbook is run in AWX, it still shows Change=1 (the idempotency check indicates that the rendered file and the existing file don't match).
--
One other piece of info noted during debugging is that AWX 9.1.1 apparently uses Python 3 in its venv, whereas my old functioning instance uses Python 2.7. Still, I've tried running the Playbook with different versions of Ansible in both Python 2 and Python 3 venvs. Again, no issue with CLI execution via "ansible-playbook foo.yml".
I tried stopping all containers, did a docker system prune -a, deleted the cloned awx repo, and re-cloned/re-installed AWX. I've even tried pointing to both an internal and an external assets database, but still no change.
Hopefully someone else has encountered this bizarre problem.
Thanks, Community!
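One thing worth checking, given the Docker Compose install: AWX executes the playbook inside its task container, so the template may well be rendering into that container's filesystem rather than onto the host you are inspecting. A hedged sketch (awx_task is the typical container name for a Docker Compose install of that era; /etc/app/foo.conf is a hypothetical destination path):
docker exec awx_task ls -l /etc/app/foo.conf        # did the file land inside the container?
docker exec awx_task find / -name foo.conf 2>/dev/null   # or search the whole container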

Fastlane working on terminal not on Jenkins

I am able to configure Fastlane locally and it works well from the terminal, but when I try to run it with Jenkins (I have configured Jenkins locally on my MacBook) it fails every time (I have reinstalled Ruby 2.5.0).
Any help on this would be highly appreciated.
I am attaching a screenshot for your reference.
Jenkins runs its build scripts as a specified user, 'jenkins'. You might want to check whether the 'jenkins' user has installed the dependencies required to run fastlane, e.g. Ruby...
Have you set up your PATH in Jenkins? In the configuration of your node, in the environment variables section, you'll want to include /usr/local/bin/ in Jenkins's PATH by entering /usr/local/bin/:$PATH.
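Both suggestions can be checked from a terminal on the same Mac by impersonating the build user; a quick sketch (assumes the 'jenkins' user has a usable shell, and paths will vary with how Ruby was installed):
sudo -u jenkins -i env | grep PATH        # the PATH the jenkins user actually sees
sudo -u jenkins -i which ruby fastlane    # are the tools visible to that user?
sudo -u jenkins -i fastlane --version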

Need to Install Concourse(CI/CD) on windows system

I need to install Concourse (CI/CD) on my local Windows machine.
Here is the process I followed:
Install BOSH on the local system. It installed successfully, and running the version command at the command prompt shows "bosh" -- "version 3.0.1-712bfd7-2018-03-13T23:26:43Z".
Try to download the concourse-lite deployment manifest file, but this fails with an error.
Follow the link below to install Concourse:
https://concoursetutorial.com/ --- section "For Windows:"
I don't recommend doing this at all, because you'll be swimming so far out of the mainstream that you'll find tons of issues, and no one is going to care enough to want to fix them.
Even if you didn't find any issues, resources require a Linux worker for anything to work, so you're going to need Linux anyway.
I recommend running your db, web, and Linux worker on Linux, and then running Windows workers as needed.
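With that layout, the Windows side only needs the concourse binary registered as a worker against the Linux web node; a sketch with placeholder hostnames and key paths (flags as in the standalone-binary docs; verify them for your Concourse version):
concourse.exe worker ^
  --work-dir C:\concourse-work ^
  --tsa-host web.example.com:2222 ^
  --tsa-public-key tsa_host_key.pub ^
  --tsa-worker-private-key worker_key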

Docker workflow for scientific computing

I'm trying to imagine a workflow that could be applied in a scientific work environment. My work involves scientific coding, basically with Python, pandas, numpy and friends. Sometimes I have to use modules that are not common standards in the scientific community, and sometimes I have to integrate compiled code into my chain of simulations. Most of the time the code I run is parallelised from an IPython notebook.
What do I find interesting about docker?
The fact that I could create a Docker image containing my code and its working environment. I could then send the image to my colleagues without asking them to change their work environment, e.g. install an outdated version of a module, just to run my code.
A rough draft of the workflow I have in mind goes something as follows:
Develop locally until I have a version I want to share with somebody.
Build a Docker image, possibly with a hook from a git repo.
Share the image.
Can somebody give me some pointers on what I should take into account to develop this workflow further? A point that intrigues me: can code running in a Docker container launch parallel processes on the several cores of the machine, e.g. an IPython notebook connected to a cluster?
Docker can launch multiple processes/threads on multiple cores. Running multiple processes in one container may require a supervisor (see: https://docs.docker.com/articles/using_supervisord/).
You should probably build an image that contains the things you always use, and use it as a base for all your projects. (That would save you the pain of writing a complete Dockerfile each time.)
Why not develop directly in a container and use the commit command to save your progress to a local Docker registry? Then share the final image with your colleagues.
How to set up a local registry: https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
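A sketch of that commit-and-share loop (registry:2 is the official registry image; the container and image names are placeholders, and pulling over plain HTTP from another machine needs the daemon's insecure-registry setting):
docker run -d -p 5000:5000 --name registry registry:2     # local registry
docker commit my-dev-container localhost:5000/sci-env:v1  # snapshot the working container
docker push localhost:5000/sci-env:v1
# a colleague on the same network could then:
docker pull yourhost:5000/sci-env:v1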
Even though you'll have a full container, I think a package manager like conda can still be a solid part of the base image for your workflow.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl
# Install Miniconda in one layer so the installer doesn't linger in the image
RUN curl -LO http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh && \
    bash Miniconda-latest-Linux-x86_64.sh -p /miniconda -b && \
    rm Miniconda-latest-Linux-x86_64.sh
ENV PATH=/miniconda/bin:${PATH}
RUN conda update -y conda
* adapted from a nice example showing Docker + Miniconda + Flask
Regarding doing source activate <env> in the Dockerfile: each RUN line gets a fresh, non-login shell, so you need to activate and use the env in a single command:
RUN /bin/bash -c "source activate <env> && <do something in the env>"
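Putting the pieces together, a hedged fragment one could append to the Dockerfile above (the env name sci and the package list are made up for illustration):
RUN conda create -y -n sci python numpy pandas
RUN /bin/bash -c "source activate sci && python -c 'import numpy, pandas'"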