Bash script that pulls a subfolder from GitHub

I've made a simple bash script to fetch subfolders from my repo on GitHub.
The problem I'm having is that I want to be able to run the script several times, getting several subfolders, but the script currently only works once. After that I get the message "branch master already up to date". How can I change it so this doesn't happen and I can pull several folders?
getFolder()
{
    repository="$1"
    folder="$2"
    remote="$3"
    branch="$4"

    # Expand the shorthand name into the full repository URL
    if [ "$repository" = school ]
    then
        repository=https://github.com/mergin/School
    fi

    git init
    git remote add "$remote" "$repository"
    # Restrict the checkout to the requested folder
    git config core.sparsecheckout true
    echo "$folder"/ > .git/info/sparse-checkout
    git pull "$remote" "$branch"
}
This is the function I currently use.
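A possible tweak, sketched below and untested: append each folder to the sparse-checkout file instead of overwriting it, then re-apply the sparse-checkout rules, since the pull itself becomes a no-op once the branch is current:
    # keep the folders from earlier runs by appending
    echo "$folder"/ >> .git/info/sparse-checkout
    git pull "$remote" "$branch"
    # re-apply the sparse-checkout rules to the working tree;
    # the pull alone does nothing once the branch is up to date
    git read-tree -mu HEAD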

Related

Azure Pipelines Get Latest files only

I'm looking for a way to create an artifact that I can attach to a deployment pipeline and that only contains the files changed in the commits that triggered the build.
What I have is a repo with change scripts for database objects, so I want to package up only the change scripts from the last commit into a zip file and attach it to the build outputs. That way I can take the zip file and apply each of the files on top of the database. This will be done later in a different step; right now I'm just trying to get all of the files that were changed.
Edited
I have created the following step in the YAML file based on the comments below:
- powershell: |
    #get the changed files
    echo "git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion)"
    $a = git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion)
    #assign the filename to a variable
    echo "Files"
    echo "##vso[task.setvariable variable=fileName]$a"
- powershell: |
    #Print Files
    echo "$env:fileName"
Below is the result; you can see that no files are changed. Here I changed the Readme file, which triggered the build.
Not sure if this will help you, but hopefully it points you in the right direction.
Assuming you have Git as source control, have you considered querying those changes using Git instead? (I haven't tried, but I bet you'll be able to find in the pipeline metadata which merge triggered the build, and then use it to query Git for the file changes.)
Have a look at this question on Stack Overflow.
Hope it helps.
If you are using Git version control, you could try adding a script task to get the changed file names in your pipeline, copy them to the artifact directory, and then publish them.
It is easy to get the changed files using the git command git diff-tree --no-commit-id --name-only -r commitId. Once you have the changed file names, you need to assign them to a variable using the logging expression ##vso[task.setvariable variable=VariableName]value. Then you can use this variable in the copy and publish tasks.
You can check the below YAML pipeline for an example:
- powershell: |
    #get the changed files
    $a = git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion)
    #assign the filename to a variable
    echo "##vso[task.setvariable variable=fileName]$a"
- powershell: |
    echo "$env:fileName"
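The copy and publish steps could then look something like the sketch below (my own illustration, not part of the original answer; it assumes a single changed file, and the artifact name is a placeholder):
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    # the variable set by the script step above
    Contents: '$(fileName)'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    # placeholder artifact name
    ArtifactName: 'changed-scripts'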

Polymer 2.0 upload to GitHub-Pages

I have a problem with uploading my Polymer component to gh-pages.
I'm trying this, from the tutorial:
# git clone the Polymer tools repository somewhere outside of your
# element project
git clone git://github.com/Polymer/tools.git
# Create a temporary directory for publishing your element and cd into it
mkdir temp && cd temp
# Run the gp.sh script. This will allow you to push a demo-friendly
# version of your page and its dependencies to a GitHub pages branch
# of your repository (gh-pages). Below, we pass in a GitHub username
# and the repo name for our element
../tools/bin/gp.sh <username> <test-element>
# Finally, clean-up your temporary directory as you no longer require it
cd ..
rm -rf temp
But it's not working.
In the terminal I get these errors:
Is there something I'm missing?
Here is your problem:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
For the script to run as intended, you need to add your public SSH key to your GitHub project: Settings -> Deploy keys -> Add deploy key.
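For example (a generic sketch, not from the original answer; the email is a placeholder):
# generate a key pair if you don't have one yet
ssh-keygen -t rsa -b 4096 -C "you@example.com"
# paste the contents of ~/.ssh/id_rsa.pub under
# Settings -> Deploy keys -> Add deploy key, then test the connection:
ssh -T git@github.com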
Alternatively, you can manually execute the steps in gp.sh that involve pulling from and pushing to github.
If you don't feel like splitting up the script, try running the commands manually; that should work. The only multi-line command in the script is this one:
echo "{
\"directory\": \"components\"
}
" > .bowerrc
Good luck.

Pushing entire Rundeck configuration to github

I want to push the entire Rundeck configuration to GitHub. Is there any way to do this?
Considering the Rundeck configuration layout, all you would need to do is:
cd $RDECK_BASE
git init .
git remote add origin /url/new/empty/GitHub/repo
# ignore everything by default...
echo '*' > .gitignore
# ...then whitelist the configuration folders (note the appends)
echo '!/etc/' >> .gitignore
echo '!/server/' >> .gitignore
echo '/server/*' >> .gitignore
echo '!/server/config/' >> .gitignore
git add .
git commit -m "Rundeck config"
git push -u origin master
Basically, you need to ignore what is not configuration before adding everything else (i.e., the config files), then push to your own GitHub repo.
Make sure those files don't have sensitive credential information in them, though (or at least push them to a private GitHub repo if you have one).
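As a quick sanity check before committing (my own suggestion; the example path is hypothetical and may not exist in your layout):
# list what will actually be staged
git status --short
# ask git which ignore rule matches a given path
git check-ignore -v server/logs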

SCM environment variables missing

Usually, when using an SCM plugin like the Git Plugin, there are a bunch of environment variables that you can use (e.g. see these).
But neither the Git Step nor the Generic SCM seems to do that.
Is there a way to get these variables into the groovy env.* so that they can be used?
Something like this would be useful:
def commitMessage = sh 'git log --max-count=1 --oneline --no-merges | cut -b9-'
I can think of writing the results to a file and reading them back via the readFile() method, but is there an easier way to achieve this?
For the record, I have the following code to get the branch name:
stage 'preparation'
node {
    // checkout branch
    git branch: 'origin/master', url: 'git@example.net:project.git'
    // write current branch-name to file
    sh 'git branch -a --contains `git rev-parse HEAD` | grep origin | sed \'s!\\s*remotes/origin/\\(.*\\)!\\1!\' > git-branch.txt'
    // read data from file into environment-variable
    env.gitBranch = readFile('git-branch.txt').trim()
    // let people know what's up
    echo "testing branch ${env.gitBranch}"
}
The remainder of the flow script comprises several parametrized jobs which get env.gitBranch passed as a parameter (among others, if needed).
Be sure to allow concurrent builds for the workflow to catch every updated branch.
See JENKINS-24141; these variables are not yet available from Workflow.
In the meantime, you are on the right track: run a git command to record any information you need, and use readFile to load it (see also JENKINS-26133).
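Applying the same workaround to the commit message from the question might look like this (an untested sketch; the temp file name is arbitrary):
node {
    // record the last commit message, then load it into an env variable
    sh 'git log --max-count=1 --oneline --no-merges | cut -b9- > commit-msg.txt'
    env.commitMessage = readFile('commit-msg.txt').trim()
    echo "last commit: ${env.commitMessage}"
}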

Mercurial hook not executing properly

This should be a very simple thing to have run, but for some reason it won't work with my Mercurial repository. All I want is for the remote repo to automatically run hg update whenever someone pushes to it. So I have this in my .hg/hgrc file:
[hooks]
changegroup = hg update
Simple, right? But for some reason, this never executes. I also tried writing a shell script that did this. .hg/hgrc looked like this:
[hooks]
changegroup = /home/marc/bin/hg-update
and hg-update looked like this:
#!/bin/sh
hg help >> /home/marc/works.txt;
hg update >> /home/marc/works.txt;
exit 0;
But again, this doesn't update. The contents of hg help are written out to works.txt, but nothing is written out for hg update. Is there something obvious I'm missing here? This has been plaguing me for days and I just can't seem to get it to work.
Update
Okay, so again: using the -v switch when pushing to the remote repo from my workstation doesn't print any verbose messages, even when I have those echo lines in .hg/hgrc. However, when I do a push from a clone of the repo on the same filesystem (I'm logged in via SSH), this is what I get:
bash-3.00$ hg -v push ../test-repo/
pushing to ../test-repo/
searching for changes
1 changesets found
running hook prechangegroup: echo "Remote repo is at `hg tip -q`"
echo "Remote repo wdir is at `hg parents -q`"
Remote repo is at 821:1f2656753c98
Remote repo wdir is at 821:1f2656753c98
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files
running hook changegroup: echo "Updating.... `hg update -v`"
echo "Remote repo is at `hg tip -q`"
echo "Remote repo wdir is at `hg parents -q`"
Updating.... resolving manifests
getting license.txt
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
Remote repo is at 822:389a6c7276c6
Remote repo wdir is at 822:389a6c7276c6
So it works, but again only when I push from the same filesystem. It doesn't work if I try pushing to the repo from another workstation over the network.
Well, after going through the same steps of frustration as Marc W did a while ago, I finally found the solution to the problem, at least when remote serving is done with the hgwebdir WSGI script.
I found out that when using this kind of remote push via HTTP or HTTPS, Mercurial simply ignores everything you write into the .hg/hgrc file of your repository. However, entering the hook in the hgwebdir config does the trick.
So if the bottom line in your hgwebdir.wsgi script is something like
application = hgwebdir('hgweb.config')
the [hooks] config section needs to go into the mentioned hgweb.config.
One drawback is that these hooks are executed for every repository listed in the [paths] section of that config. Even though HG offers another WSGI-capable function (hgweb instead of hgwebdir) to serve only a single repository, that one doesn't seem to support any hooks (nor does it have any config).
This can, however, be circumvented by using a hgwebdir as described above and having some Apache RewriteRule map everything into the desired subdirectory. This one works for me:
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/reponame
RewriteRule ^(.*)$ reponame/$1 [QSA]
Have fun using your remote hooks over HTTP :D
I spent some time researching this myself. I think the answer to the problem is described concisely here:
Output has to be redirected to stderr (or /dev/null), because stdout
is used for the data stream.
Basically, you're not redirecting to stderr, and hence polluting stdout.
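In other words, a minimal sketch of that advice applied to the hook from the question:
[hooks]
# send the hook's output to stderr so it doesn't corrupt the data stream
changegroup = hg update >&2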
First of all, I want to correct a few comments above.
Hooks are also invoked when pushing over the file system.
It is not necessary to keep the hook in the repo on which you want it to operate. You can also write the same hook as in your question on the user's end: change the event from changegroup to outgoing, and specify the URL of the remote repo with the -R switch. Then, if the pushing user has sufficient privileges on the remote repo, the hook will execute successfully.
.hg/hgrc
[hooks]
outgoing = hg update -R $HG_URL
Now, to your problem: I suggest creating both prechangegroup and changegroup hooks and printing some debugging output.
.hg/hgrc
[hooks]
prechangegroup = echo "Remote repo is at `hg tip -q`"
                 echo "Remote repo wdir is at `hg parents -q`"
changegroup = echo "Updating.... `hg update -v`"
              echo "Remote repo is at `hg tip -q`"
              echo "Remote repo wdir is at `hg parents -q`"
Also push with the -v switch, so that you can see which hook is running. If you still can't figure it out, post the output; I might be able to help.
My problem was that my hgwebdir application ran as the "hg" user, but the repository was owned by me, so I had to add this bit of config to hgweb.config to get it to run the hooks:
[trusted]
users = me
You need to have it in the remote repository's hgrc. It sounds as if it's in your local repo.
Edit: It also depends on how you're pushing. Some methods don't invoke hooks on the right side. (ssh does, I think HTTP does, file system does not)
Edit 2: What if you push "locally" on the remote repo's computer? You might have different users/permissions between the webserver and the hgrc file. (See the [server] and trusted directives for hgrc.)
I had the same problem pushing from Windows Eclipse via HTTP, but after capturing stderr, I found that the full path to the hg.bat file was needed. My hooks section now looks like:
[hooks]
incoming = c:\Python27\Scripts\hg.bat update > hg_log.txt 2>>hg_err.txt
Hope this helps someone else.
SteveT
Try turning on hook debugging to see why it's not running.
Likely a permissions issue or something like that.
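One way to see this (my own note, not part of this answer): push with the global --debug flag, which is more verbose than -v and should surface hook and permission errors:
hg push --debug https://host/hg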
It took a while, but I got it working.
I started with
[hooks]
tag=set >&2
commit=set >&2
The >&2 redirects the output to standard error, so remote consoles will show it.
When pushing remotely, this should produce output in the console if the hook is running:
hg push https://host/hg -v
It wasn't.
I was using hgweb.cgi so I switched to hgweb.wsgi with no difference.
What I discovered is that some hooks don't get called on the remote side.
When I switched it to
[hooks]
incoming = set >&2
the tag and commit hooks don't seem to get called, but incoming and changegroup do. I haven't confirmed the others.
Now that I have it working, I've switched back to hgweb.cgi and everything works the same.
The reason I've found for this has nothing to do with redirecting stdout to stderr. As you can see, that isn't mentioned in the wiki page's current version:
https://www.mercurial-scm.org/wiki/FAQ#FAQ.2FCommonProblems.Any_way_to_.27hg_push.27_and_have_an_automatic_.27hg_update.27_on_the_remote_server.3F
The problem I've found is around permissions.
In my original setup, I had a user, let's say hguser, with a repo in its home directory, and a script /etc/init.d/hg.init to launch hg serve. The problem was that hg serve was being run by root, while most files under the repo belonged to hguser (some of them had switched to root at some point, but that doesn't matter, since I correct them with chown below).
Solution:
1. chown -R hguser:hguser /home/hguser/repo (to correct ALL files back to hguser)
2. Launch su hguser -c "hg serve ..." (in my case from /etc/init.d/hg.init)
3. Put changegroup = hg update -C under [hooks] in repo/.hg/hgrc as usual
Now it should work on push.
PS: in my case, I'd rather update to the head of a specific branch, so I use hg update -C -r staging to make the staging server update only to the head of the intended branch, even if the tip is from another branch (like development, for instance).
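The hook line would then look something like this (my own combination of the branch pin above with the earlier advice to redirect output to stderr):
[hooks]
# pin the working copy to the head of the 'staging' branch on every push
changegroup = hg update -C -r staging >&2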
BTW, my hg.init script ended up like this (notice the su hguser part):
#!/bin/sh
#
# Startup script for the mercurial server.
#
# see http://jf.blogs.teximus.com/2011/01/running-mercurial-hg-serve-on-linux.html

HG=/usr/bin/hg
CONF=/etc/mercurial/hgweb.config
# Path to PID file of the running mercurial process.
PID_FILE=/etc/mercurial/hg.pid

state=$1

case "$state" in
'start')
    echo "Mercurial Server service starting."
    # run hg serve as hguser rather than root
    (su hguser -c "${HG} serve -d --webdir-conf ${CONF} -p 8000 --pid-file ${PID_FILE}")
    ;;
'stop')
    if [ -f "${PID_FILE}" ]; then
        PID=`cat "${PID_FILE}"`
        if [ "${PID}" -gt 1 ]; then
            kill -TERM ${PID}
            echo "Stopping the Mercurial service PID=${PID}."
        else
            echo Bad PID for Mercurial -- \"${PID}\"
        fi
    else
        echo No PID file recorded for mercurial
    fi
    ;;
*)
    echo "$0 {start|stop}"
    exit 1
    ;;
esac
PS: due credit to http://jf.blogs.teximus.com/2011/01/running-mercurial-hg-serve-on-linux.html