How to get a list of tags from a DesignSync server vault

I'm working with a DesignSync vault that contains branches.
What is the way to extract a list of files with their attached tags?
The command I'm using is:
dssc ls -noheader -fullpath -report NG -rec sync://my.server.address.com:1234/Projects/Name

The approach is right.
This is the way to get a list of tags in a DesignSync vault.
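If you only need the files carrying one specific tag, a minimal follow-up is to filter that report's output with grep. This is a sketch, assuming the -report NG output prints each file together with its tags on one line (worth verifying against your own vault); MY_TAG is a placeholder:
# Hypothetical filter: keep only report lines mentioning a given tag (MY_TAG is a placeholder).
# Assumes "-report NG" prints each file together with its tags on one line.
dssc ls -noheader -fullpath -report NG -rec \
  sync://my.server.address.com:1234/Projects/Name | grep "MY_TAG"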


Pull Request "reviewers" using github "history"

Is there any way (for on-premise GitHub) to:
for N files in the Pull Request,
look at the history of those files,
and add any/all GitHub users (from that history) to the pull request's list of code reviewers?
I have searched around.
I found "in general" items like this:
https://www.freecodecamp.org/news/how-to-automate-code-reviews-on-github-41be46250712/
But I cannot find anything regarding the specific "workflow" I describe above.
We can get the list of changed files from the PR into a text file, then run the git command below to get the list of users in the latest version's blame. For each file in the list, run the blame command. This can also be a simple script (a rough combined sketch follows at the end of this answer):
Generate a txt file from the list of files in the PR.
Traverse all filenames in the txt file (Python, Bash, etc.).
Run the blame command for each and store the results in a list.
Add reviewers to the PR from that list, manually or with a small JS script.
For the GitHub-specific part, see: list-pull-requests-files
The blame command is something like :
git blame filename --porcelain | grep "^author " | sort -u
As a note, some users may no longer be available on GitHub; an extra step can be added after we get the usernames to check whether they still exist or not (it looks achievable through the GitHub API).
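Putting the steps together, here is a rough bash sketch, assuming you have a local clone of the repository and an on-premise GitHub Enterprise API base URL. The API host, OWNER/REPO, PR_NUMBER and TOKEN are placeholders, and the /pulls/.../files endpoint is the list-pull-requests-files API mentioned above:
# Placeholders: adjust the API base URL, repository, PR number and token to your setup.
API="https://github.example.com/api/v3"   # hypothetical on-premise GitHub Enterprise API base
REPO="OWNER/REPO"
PR="PR_NUMBER"
TOKEN="<personal access token>"

# 1. Generate a txt file from the list of files in the PR (first 100 files; page through if larger).
curl -s -H "Authorization: token $TOKEN" \
  "$API/repos/$REPO/pulls/$PR/files?per_page=100" \
  | jq -r '.[].filename' > pr_files.txt

# 2./3. Traverse the filenames, run blame on each and store the deduplicated author list.
# Note: blame reports commit author names, which may still need mapping to GitHub usernames.
while read -r f; do
  git blame "$f" --porcelain | grep "^author " | sed 's/^author //'
done < pr_files.txt | sort -u > reviewers.txt

# 4. Feed reviewers.txt to the "request reviewers" API, optionally checking each
#    user still exists first (e.g. via GET /users/<name>).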

Is there a way to bulk/batch download all repos from Github based on a search result?

I run this search on GitHub and I get 881 repos: Blazor & C# repos.
https://github.com/search?l=C%23&q=blazor&type=Repositories
Is there a way to download all these repos easily instead of one by one?
Yes, your query can be run via the GitHub Search API:
https://api.github.com/search/repositories?q=blazor+language:C%23&per_page=100&page=1
That gives you one page of 100 repositories. You can loop over all pages, extract the ssh_url (or http if you prefer), and write the result to a file:
# cheating knowing we currently have 9 pages
for i in {1..9}
do
  curl "https://api.github.com/search/repositories?q=blazor+language:C%23&per_page=100&page=$i" \
    | jq -r '.items[].ssh_url' >> urls.txt
done
cat urls.txt | xargs -P8 -L1 git clone
You can optimize this by extracting the number of pages from the response's Link header.
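For example, something along these lines reads the rel="last" entry from the Link header instead of hard-coding the 9 pages (untested sketch; assumes curl, sed -E and jq are available):
# Read the page count from the rel="last" entry of the Link header (dump headers, discard body).
last_page=$(curl -sD - -o /dev/null \
  "https://api.github.com/search/repositories?q=blazor+language:C%23&per_page=100&page=1" \
  | grep -i '^link:' \
  | sed -E 's/.*[?&]page=([0-9]+)>; rel="last".*/\1/')

for i in $(seq 1 "$last_page")
do
  curl "https://api.github.com/search/repositories?q=blazor+language:C%23&per_page=100&page=$i" \
    | jq -r '.items[].ssh_url' >> urls.txt
done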
References:
https://developer.github.com/v3/search/
Parsing JSON with Unix tools
How to apply shell command to each line of a command output?
Running programs in parallel using xargs
Similar question:
GitHub API - Different number of results for jq filtered response

Jenkins Powershell Output

I would like to capture the output of some variables to be used elsewhere in the job using Jenkins Powershell plugin.
Is this possible?
My goal is to somehow build the latest tag, and the PowerShell script was meant to achieve that. Outputting to a text file would not help, and environment variables can't be used because the process is seemingly forked, unfortunately.
Besides EnvInject, another common approach for sharing data between build steps is to store results in files located in the job workspace.
The idea is to skip using environment variables altogether and just write/read files.
It seems that the only solution is to combine this with the EnvInject plugin: create a text file with key-value pairs from PowerShell, then export them into the build using the EnvInject plugin.
You should make the workspace persistent for this job, then you can save the data you need to a file. Other jobs can then access this persistent workspace or use it as their own, as long as they are on the same node.
Another option would be to use Jenkins' built-in artifact retention: at the end of the job's configure page there is an option to retain files specified by a pattern (e.g. *.xml or last_build_number). These are then given a specific address that can be used by other jobs regardless of which node they are on; the address can be on the master or the node, IIRC.
For the simple case of wanting to read a single object from PowerShell, you can convert it to a JSON string in PowerShell and then convert it back in Groovy. Here's an example:
def pathsJSON = powershell(returnStdout: true, script: "ConvertTo-Json ((Get-ChildItem -Path *.txt) | select -Property Name)");
def paths = [];
if (pathsJSON != '') {
    paths = readJSON text: pathsJSON
}

gsutil - is it possible to list only folders?

Is it possible to list only the folders in a bucket using the gsutil tool?
I can't see anything listed here.
For example, I'd like to list only the folders in this bucket:
Folders don't actually exist. gsutil and the Storage Browser do some magic under the covers to give the impression that folders exist.
You could filter your gsutil results to only show results that end with a forward slash, but this may not show all the "folders". It will only show "folders" that were manually created (i.e., not ones that exist only implicitly because an object name contains slashes):
gsutil ls gs://bucket/ | grep -e "/$"
Just to add here: if you directly drag a folder tree into the Google Cloud Storage web GUI, you don't really get an entry for a parent folder; in fact each file name is a fully qualified path, e.g. "/blah/foo/bar.txt", instead of a folder hierarchy blah > foo > bar.txt.
The trick here is to first use the GUI to create a folder called blah, then create another folder called foo inside it (using the button in the GUI), and finally drag the files into it.
When you now list the files you will get separate entries for
blah/
foo/
bar.txt
rather than only one
blah/foo/bar.txt

stsadm command for MOSS 2007

I'm looking for the stsadm command that will list all site collections and their size on disk. Thanks.
This will output the site collection list into a file called sites.xml, which can be used in other stsadm commands, like mergecontentdbs
stsadm -o enumsites -url http://MySharePointSite > C:\sites.xml
Note that this can take a long time if there are lots of site collections - it takes 2 hours on my SharePoint system.
Edit: Added commands for subwebs:
For the subwebs, you can use the enumsubwebs command, passing it the root url of the site collection to enumerate:
stsadm -o enumsubwebs -url http://MySharePointSite
However, this doesn't give you the size of the web, nor does it output an entry for the root web of the site collection, nor does it recurse, so you don't get the subwebs of the subwebs. It's just a list of top-level subwebs.
You can get a complete list of all the webs in a content database with the new enumallwebs command:
stsadm -o enumallwebs -databasename MyContentDatabaseName
Unfortunately, it doesn't give you the size of the web either.