Bitbucket REST API to search in a remote private master repository

Is there any way I can search for a specific string in private master repositories in Bitbucket?
Below is the code I use to get the clone commands; I then download the files and grep them. But is it possible to search directly? I need the output like:
Full file path, search result (the line with the word I am searching for)
set -e
echo -n '' > clone-repos.sh
chmod +x clone-repos.sh
ONPREM_USER=user1
ONPREM_PASS=pass1
ONPREM_PROJECT=project1
# list all repos in the project and emit a "git clone <url> <slug>" line for each
curl -s -u "$ONPREM_USER:$ONPREM_PASS" https://bitbucket.bmogc.net/rest/api/1.0/projects/$ONPREM_PROJECT/repos/\?limit=1000 | ./jq-win64.exe -r '.values[] | {slug:.slug, links:.links.clone[] } | select(.links.name=="http") | "git clone \(.links.href) \(.slug)"' >> clone-repos.sh
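Assuming the Bitbucket Server REST endpoints for listing files and fetching raw file content are available on that host, a search could be done directly (without cloning) along the lines of the sketch below; the endpoint paths, the SEARCH_TERM variable and the use of plain jq instead of ./jq-win64.exe are assumptions, not something confirmed by the original post:
set -e
BASE=https://bitbucket.bmogc.net/rest/api/1.0
AUTH="$ONPREM_USER:$ONPREM_PASS"
SEARCH_TERM=TODO   # hypothetical: the string to look for
# walk every repo in the project
for repo in $(curl -s -u "$AUTH" "$BASE/projects/$ONPREM_PROJECT/repos?limit=1000" | jq -r '.values[].slug'); do
  # list every file path in the repo, fetch each raw file and grep it,
  # labelling matches as "repo/path:line:matching line"
  curl -s -u "$AUTH" "$BASE/projects/$ONPREM_PROJECT/repos/$repo/files?limit=10000" | jq -r '.values[]' |
  while read -r path; do
    curl -s -u "$AUTH" "$BASE/projects/$ONPREM_PROJECT/repos/$repo/raw/$path" |
      grep -Hn --label="$repo/$path" "$SEARCH_TERM" || true
  done
done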

Related

How can I batch archive all GitHub repositories in my account?

How do I batch archive my repositories? I'd preferably want to be able to sort through them and figure out a way to not archive my active repositories.
I have hundreds of old GitHub repositories in my account from before the GitHub notifications feature existed, and now I get vulnerability notifications for all of them. Here's what my notifications look like for projects that were last used maybe 6 years ago:
You can use the GitHub API along with two tools to achieve this. I'll be using:
Hub, but you can make direct API calls
jq, but you can use any JSON parser
Here's how:
Fetch a list of all the GitHub repositories in your account and save them to a file:
hub api --paginate users/amingilani/repos | jq -r '.[]."full_name"' > repos_names.txt
Go through that file manually and remove any repositories you don't want to archive.
Archive all the repositories in the file:
cat repos_names.txt | xargs -I {} -n 1 hub api -X PATCH -F archived=true /repos/{}
Note: since 2020:
gh repo list has been released (with gh 1.7.0 in commit 00cb921, Q1 2021): it takes pagination into account, as it is similar to an alias like:
set -e
repos() {
  local owner="${1?}"
  shift 1
  # paginate through all repositories owned by $owner via the GraphQL API
  gh api graphql --paginate -f owner="$owner" "$@" -f query='
    query($owner: String!, $per_page: Int = 100, $endCursor: String) {
      repositoryOwner(login: $owner) {
        repositories(first: $per_page, after: $endCursor, ownerAffiliations: OWNER) {
          nodes {
            nameWithOwner
            description
            primaryLanguage { name }
            isFork
            pushedAt
          }
          pageInfo {
            hasNextPage
            endCursor
          }
        }
      }
    }
  ' | jq -r '.data.repositoryOwner.repositories.nodes[] | [.nameWithOwner,.pushedAt,.description,.primaryLanguage.name,.isFork] | @tsv' | sort
}
repos "$@"
gh repo list --no-archived can limit the list to your not-yet-archived repositories
gh repo archive can then, for each element of that list, archive the GitHub repository.
wolfram77 also proposes in the comments:
gh repo list <org> | awk '{NF=1}1' | \
while read in; do gh repo archive -y "$in"; done
Using only gh.
gh repo list --no-archived --limit 144 --visibility public --source --json nameWithOwner --jq ".[].nameWithOwner" > repos_names.txt
Set --limit to the number of repositories you have.
Use vim to delete the lines you don't want to archive by pressing dd on each line:
vim repos_names.txt
Run the following command to archive them:
cat repos_names.txt | while read in; do gh repo archive -y "$in"; done
Clean up afterwards:
rm repos_names.txt

Is there a way to batch archive GitHub repositories based on a search?

From the answer to a related question I know it's possible to batch clone repositories based on a GitHub search result:
# cheating knowing we currently have 9 pages
for i in {1..9}
do
curl "https://api.github.com/search/repositories?q=blazor+language:C%23&per_page=100&page=$i" \
| jq -r '.items[].ssh_url' >> urls.txt
done
cat urls.txt | xargs -P8 -L1 git clone
I also know that the Hub client allows me to make API calls.
hub api [-it] [-X METHOD] [-H HEADER] [--cache TTL] ENDPOINT [-F FIELD|--input FILE]
I guess the last step is, how do I archive a repository with Hub?
You can update a repository using the Update a Repository API call.
I put all my repositories in a TMP variable and ran the following:
echo $TMP | xargs -P8 -L1 hub api -X PATCH -F archived=true
Here is a sample of what the $TMP variable looked like:
echo $TMP
/repos/amingilani/9bot
/repos/amingilani/advent-of-code-2019
/repos/amingilani/alan
/repos/amingilani/annotate_models
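For completeness, one way such a list could be built (a sketch, not part of the original answer) is:
# hypothetical: build the /repos/<owner>/<name> entries that $TMP contained
TMP=$(hub api --paginate users/amingilani/repos | jq -r '"/repos/" + .[].full_name')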

Initialise and pull terraform public modules using GitHub SSH private key

Context:
I have GitLab runners which execute the terraform init command, which pulls all the necessary terraform modules. Recently, I started hitting GitHub throttling issues (60 calls to the GitHub API per hour), so I am trying to reconfigure my pipeline to use a GitHub user's private key.
Currently, I have the following in my pipeline, but it still doesn't seem to work and the private key isn't used to pull the terraform modules.
- GITHUB_SECRET=$(aws --region ${REGION} ssm get-parameters-by-path --path /github/umotifdev --with-decryption --query 'Parameters[*].{Name:Name,Value:Value}' --output json);
- PRIVATE_KEY=$(echo "${GITHUB_SECRET}" | jq -r '.[] | select(.Name == "/github/umotifdev/private_key").Value' | base64 -d);
- PUBLIC_KEY=$(echo "${GITHUB_SECRET}" | jq -r '.[] | select(.Name == "/github/umotifdev/public_key").Value' | base64 -d);
- mkdir -p ~/.ssh;
- echo "${PRIVATE_KEY}" | tr -d '\r' > ~/.ssh/id_rsa;
- chmod 700 ~/.ssh/id_rsa;
- eval $(ssh-agent -s);
- ssh-add ~/.ssh/id_rsa;
- ssh-keyscan -H 'github.com' >> ~/.ssh/known_hosts;
- ssh-keyscan github.com | sort -u - ~/.ssh/known_hosts -o ~/.ssh/known_host;
- echo -e "Host github.com\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config;
- echo ${PUBLIC_KEY} >> ~/.ssh/authorized_keys
The error I am seeing in my pipeline is something like (which is basically throttling from github):
Error: Failed to download module
Could not download module "vpc" (vpc.tf:17) source code from
"https://api.github.com/repos/terraform-aws-modules/terraform-aws-vpc/tarball/v2.21.0//*?archive=tar.gz":
bad response code: 403.
Can anyone advise how to resolve the issue where the private key isn't used to pull the terraform modules?
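The thread records no accepted fix, but a common approach, assuming the modules are fetched with git from github.com URLs, is to rewrite HTTPS GitHub URLs to SSH before terraform init so that the key loaded into ssh-agent is actually used; module sources that resolve to api.github.com tarballs (as in the error above) would instead need to be switched to git:: sources:
# sketch only, not from the original pipeline: make module downloads go
# through git over SSH so the private key in ssh-agent is picked up
- git config --global url."ssh://git@github.com/".insteadOf "https://github.com/"
- terraform init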

How to export log files from Travis CI to GitHub?

I am using Travis CI (travis-ci.org, public repo) to execute my builds, and the log is printed on the Travis build page. I want to extract the log file and send it to GitHub or to any other external open source tool to access it.
Could you please let us know how to achieve this?
You can deploy build artifacts to S3. Paste the code below into your .travis.yml file if you are using GitHub and S3:
addons:
  artifacts:
    paths:
      - $(git ls-files -o | tr "\n" ":")
deploy:
  - provider: s3
    access_key_id: $ARTIFACTS_KEY
    secret_access_key: $ARTIFACTS_SECRET
    bucket: $ARTIFACTS_BUCKET
    skip_cleanup: true
    acl: public_read
Also, if you want to send it to a free open source tool, you can use chunk.io. Place the code below in a shell script and call it from the after_failure section of your .travis.yml file:
cd path/to/directory/where/untracked/files/are/stored/
count=$(git ls-files -o | wc -l)
git ls-files -o
echo ">>>>>>>>> CONTAINERS LOG FILES <<<<<<<<<<<<"
for (( i=1; i<=count; i++ ))
do
    # pick the i-th untracked file and upload it to chunk.io
    file=$(git ls-files -o | sed "${i}q;d")
    echo "$file"
    curl -sT - chunk.io < "$file"
done
echo " >>>>> testsummary log file <<<< "
curl -sT - chunk.io < testsummary.log
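The after_failure hook that calls this script could then look like the following in .travis.yml (the script name upload_logs.sh is a placeholder, not from the original answer):
after_failure:
  - bash ./upload_logs.sh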

Need a command or script to download all repositories (as .ZIP) of a particular organization from GitHub

Looking for a command or Python script to download all repositories, or sub-branches, of a particular organization from GitHub at once.
This gist (or this one) lets you list and clone all repos from an organization:
curl -s https://api.github.com/orgs/twitter/repos?per_page=200 | ruby -rubygems -e 'require "json"; JSON.load(STDIN.read).each { |repo| %x[git clone #{repo["ssh_url"]} ]}'
You have the same in Python with the project muhasturk/gitim.
It isn't hard to curl the zip archive of a repo instead of cloning it:
curl -u '<git username>' -L -o master.zip https://github.com/<organization>/<reponame>/zipball/master
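Combining the two ideas above, a small loop (a sketch; add -u '<git username>' for private repos and mind the API rate limits) could download every repository of an organization as a zip archive:
ORG=twitter   # placeholder organization name
curl -s "https://api.github.com/orgs/$ORG/repos?per_page=200" |
  jq -r '.[].full_name' |
  while read -r repo; do
    # e.g. twitter/somerepo -> somerepo.zip
    curl -sL -o "$(basename "$repo").zip" "https://github.com/$repo/zipball/master"
  done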