GitHub Actions runner: problem configuring it as a service - github

I have a self-hosted runner, but it is not always online. It turns off even though it's configured as a service, and the machine itself stays up.
Starting the service gives the problem below.
The expected output of the service should look like the one below:
Any hints?
I followed this page for the configuration: https://docs.github.com/en/actions/hosting-your-own-runners/configuring-the-self-hosted-runner-application-as-a-service

It looks like your problem is with the connection to GitHub.
Some things to double-check (see the commands sketched below):
Make sure the host can connect to GitHub (a ping should prove this).
Make sure the self-hosted runner was installed with the correct token.
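A minimal sketch of both checks, assuming a Linux host and that the runner application lives in ~/actions-runner (adjust the path to your install):

ping -c 4 github.com
cd ~/actions-runner
sudo ./svc.sh status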

Related

Unable to add remote node in Rundeck 4.9.0

I am following the doc from Rundeck; however, the only button I have under the "Sources" tab is "ResourceModelSource".
When I click that button I get a blank screen.
P.P.S. The issue also happened on the previous version; I'm new to Rundeck, so I can't say that it EVER worked.
I tried adding a manual resources.xml in the project directory (which I had to create manually, which tells me that's another issue) and reloading Rundeck, but that did not seem to work.
While it's not the likely cause, I'll mention it here in case it IS relevant: I'm hosting on port 4440, but I'm using nginx to forward http (not https) requests on 443 to 4440, due to corporate network security policy.
I'm sure it's something where it's having an I/O issue on the local host, but I'm not seeing anything in the logs.
That is a known issue when you have Rundeck installed behind a proxy server; take a look at this: https://github.com/rundeck/rundeck/issues/6278. The solution is to set grails.serverURL (in the rundeck-config.properties file) to the exit URL defined for Rundeck in your proxy server (e.g. grails.serverURL=http://my_domain/rundeck), then restart the Rundeck service.
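A minimal sketch of that fix, assuming a default package install where the property already exists in /etc/rundeck/rundeck-config.properties and the service is named rundeckd:

# point grails.serverURL at the URL your proxy exposes
sudo sed -i 's|^grails.serverURL=.*|grails.serverURL=http://my_domain/rundeck|' /etc/rundeck/rundeck-config.properties
sudo systemctl restart rundeckd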

GitLab backup restore affects URL redirection

I have a production server (IP: 172.24.4.10) where GitLab 8.15.3 is installed.
I made a GitLab backup and transferred the file to a test server (IP: 172.24.4.50).
When I'm using a browser, I go to http://www.mygitlab.com, which points to IP 172.24.4.10.
The test server has the same GitLab version, and the restore from the backup file worked.
However, when I use the browser to go to http://172.24.4.50, it redirects to http://www.mygitlab.com.
That wasn't happening before the restoration on the test server.
I've been checking the gitlab and gitlab-nginx config files and I'm not finding anything related to http://www.mygitlab.com.
What can I do?
P.S.
I put http://www.mygitlab.com as an example.
My PC was restarted because there is a job in charge of restarting PCs. After that, I used the browser to go to http://172.24.4.50 and it started to work. So I think it was a browser cache issue; I didn't make any changes to the GitLab config files.
If you haven't transferred/copied over your NGINX settings, then it is a GitLab configuration issue.
Said configuration (for example in /etc/gitlab/gitlab.rb) does include:
external_url "http://gitlab.example.com"
Do check if the redirection comes from there.
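A quick way to check, assuming an Omnibus install where the configuration lives at /etc/gitlab/gitlab.rb (a sketch; a redirect can come from other places too):

grep "^external_url" /etc/gitlab/gitlab.rb
# if it still carries the production hostname, edit it, then apply:
sudo gitlab-ctl reconfigure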

WhatsApp Business API production setup not working

I am trying to configure or set up the production environment of the WhatsApp Business API as described in the link https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
I have done everything mentioned there, and my Docker containers are running on port 9090, as can be seen in the image.
Still, I can't access it. Whenever I try to call https://localhost:9090, I get the error "This site can't be reached". The WhatsApp Business API does not have good documentation or tutorials so far, so this site is my last resort.
I had a similar problem, which could be your case: I saw the Docker containers were OK, but nothing was working. After a day of searching I found where it happened. My problem was that I had installed MySQL MANUALLY (not as a Docker container) on the same instance where Docker is running, and in db.env I just used 127.0.0.1. This was passed literally to the Docker containers; looking at the wait_on_mysql.sh script, the WhatsApp containers wait until the MySQL IP is reachable before actually doing anything, printing "MySQL is not up yet - sleeping" each second. Of course they would never find any connectivity.
Since my installation is for development, and I am already using that database for other things, my solution was to use the 172.17.0.1 IP (the Docker gateway of the containers) instead, then add two sets of iptables rules on the host to redirect traffic from the Docker containers to the IP MySQL is bound to on that port (3306, the default in my case). After that everything works well. I think there are better solutions, but I didn't want to go further with it; evaluate whether this applies to your case.
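For illustration, a sketch of what those rules could look like, assuming MySQL is bound to a hypothetical host address 192.168.1.10 (both the address and the exact rules depend on your setup):

# rewrite container traffic aimed at the docker gateway so it reaches MySQL
sudo iptables -t nat -A PREROUTING -p tcp -s 172.17.0.0/16 -d 172.17.0.1 --dport 3306 -j DNAT --to-destination 192.168.1.10:3306
# allow the MySQL traffic coming from the containers
sudo iptables -A INPUT -p tcp -s 172.17.0.0/16 --dport 3306 -j ACCEPT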
Check the command:
docker-compose logs > debug_output.txt
That gives you insight into what's happening; hope it helps someone.
I think your setup is already complete. You just need to start the registration process and begin sending messages. The containers are up and running, but calling https://localhost:9090 won't send you any response, as this is not a specified API endpoint.
Since you're using the prod single instance, the documentation can be found here, and it seems pretty straightforward: https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
You seem to have completed everything up to step 7. The next step is to perform a health check to make sure the installation is healthy. The API endpoint for that is https://localhost:9090/v1/health (see https://developers.facebook.com/docs/whatsapp/api/health).
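A sketch of that health check with curl; -k skips verification of the self-signed certificate, and depending on your setup the endpoint may also require the auth token from your login:

curl -k https://localhost:9090/v1/health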
Has your DB also been set up?
I cannot see it in the Docker screenshot.
Also, you have to accept the certificate, as the server does not have a public CA-issued certificate.

Concourse result keeps loading

I'm new to Concourse and really excited to start working with it, but I have a problem running the hello world example described here: https://concourse-ci.org/hello-world.html
I'm running this example on a concourse docker setup described here: https://concourse-ci.org/docker-repository.html.
Everything seems to work just fine, but when I want to verify the results of both examples it keeps saying loading:
Task result loading (image)
Any idea why this would happen? I'm running docker-compose on Mac OS X (El Capitan), but that shouldn't matter, right? Is there some additional configuration that I'm missing?
I also noticed, when checking the network trace, that the following request doesn't return any value: /api/v1/builds/<buildnumber>/events
It keeps saying 'pending'. Is that normal? I assume it isn't, but I don't know the cause. Is there any logging I can check?
EDIT:
It seems to have something to do with the fact that it isn't running on localhost. When I use port forwarding and open Concourse on localhost:8080, the results are shown just fine. Mapping another hostname to 127.0.0.1 with port forwarding enabled also works. It only fails when I communicate directly with the exposed Docker ports. Am I missing something?
After much frustration I found out that the cause of this issue was that Sophos Anti-Virus was blocking Concourse's server-sent events (SSE)...
https://community.sophos.com/products/free-antivirus-tools-for-desktops/f/sophos-anti-virus-for-mac-home-edition/5750/sophos-av-blocks-server-sent-events-sse-on-mac-os-x-yosemite
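If you want to confirm whether the event stream is being blocked, a rough check is to request the events endpoint from the question directly (substitute your Docker host and a real build number):

curl -N -H "Accept: text/event-stream" http://<docker-host>:8080/api/v1/builds/<buildnumber>/events

If curl streams events while the browser page stays on "loading", something between the browser and the server (like the antivirus here) is interfering.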

AWS deployment without using SSH

I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run, so how can I deploy my application to an EC2 instance without using SSH?
This was the article in question.
http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires thinking of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers, you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model: for example, at boot, pull your code from a git/mercurial repository and then execute scripts to set up your instance (see the sketch below). The scripts should also set up all the monitoring required to determine whether your server and application are up and running properly. You would still need an SSH client if you want to pull your code over SSH, although you could also do it over HTTPS.
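For instance, a rough user-data sketch of that pull model, assuming Amazon Linux and a hypothetical HTTPS repository and setup script:

#!/bin/bash
# runs once at boot: pull the application over HTTPS, then configure the instance
yum install -y git
git clone https://github.com/example/myapp.git /opt/myapp
/opt/myapp/scripts/setup.sh   # hypothetical script: install deps, start the app, wire up monitoring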
You can also use configuration management tools that don't use SSH at all, like Puppet or Chef. Essentially your node/server pulls all of your application and server configuration from the Puppet master or the Chef server. The Puppet agent or Chef client then performs all the configuration/deployment/monitoring changes needed for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times if there's something wrong with one of your servers, and when something does go wrong, discard the server and spin up a new one. (Even better if this whole process is automated.)
Hope this helps.