Upload files to a GitHub directory using the GitHub API

I want to upload files from my system to a directory in a GitHub repo using the API. Is there an API endpoint which allows me to do that?

You should use the GitHub CRUD API, which was introduced in May 2013.
It includes:
File Create
PUT /repos/:owner/:repo/contents/:path
File Update
PUT /repos/:owner/:repo/contents/:path
File Delete
DELETE /repos/:owner/:repo/contents/:path

There's definitely a way to do it.
This is how I did it:
curl -i -X PUT -H 'Authorization: token 9xxxxxxxxxxxxxxxxxxxxxxxe2' \
  -d '{"message": "uploading a sample pdf",
       "content": "bXkgbm…"
      }' \
  https://api.github.com/repos/batman/toys/contents/sample.pdf
Where the content property is a base64-encoded string of characters. I used this tool to encode my PDF file: https://www.freeformatter.com/base64-encoder.html
Notice, "batman" is the owner, "toys" is my repo, "contents" has to be there by default, and sample.pdf would the name of the file you want to upload your file as.
In short, stick to this format: /repos/:owner/:repo/contents/:path
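If you'd rather not paste the base64 by hand, the whole thing can be scripted. A minimal shell sketch, assuming a personal access token in $GITHUB_TOKEN and GNU base64 (OWNER, REPO, and report.pdf are placeholders); note that updating an existing file additionally requires the sha of the current blob:

# base64-encode a local file and create it in the repo via the Contents API
CONTENT=$(base64 -w 0 report.pdf)   # -w 0 disables line wrapping
curl -i -X PUT \
  -H "Authorization: token $GITHUB_TOKEN" \
  -d "{\"message\": \"upload report.pdf\", \"content\": \"$CONTENT\"}" \
  "https://api.github.com/repos/OWNER/REPO/contents/report.pdf"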
And you can follow the same steps for any of these file types:
PNG (.png)
GIF (.gif)
JPEG (.jpg)
Log files (.log)
Microsoft Word (.docx), PowerPoint (.pptx), and Excel (.xlsx) documents
Text files (.txt)
PDFs (.pdf)
ZIP (.zip, .gz)
Good luck.
Btw, I have these same details written up here: http://www.simplexanswer.com/2019/05/github-api-how-to-upload-a-file/

Related

Unzip gzip files in Azure Data factory

I am wondering if it is possible to set up a source and sink in ADF that will unzip a gzip file and write out the extracted txt file. What happened is that the sink was incorrectly defined: both the source and the sink had gzip compression.
So what ended up happening is that "file1.gz" is now "file1.gz.gz".
(Screenshots of the file in Azure Blob and in an S3 bucket are omitted; in the S3 screenshot the name is cut off, but it ends in "txt.gz".)
I saw that the Copy activity has ZipDeflate and Deflate compression types, but I get an error that it does not support this type of activity.
I created a sink in an ADF pipeline where I am trying to unzip it. On the data source screen I used ZipDeflate, but it writes the file name with a "deflate" extension instead of "txt".
Thank you
create a "copy data" object
Source:
as your extenstion is gz, you should choose GZip as compresion type, tick binary copy
Target:
Blob Storage Binary
compresion- none
Such copy pipeline will unzip your text file(s)
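For reference, a rough sketch of what the source dataset JSON might look like (following the ADF Binary dataset schema; the linked service, container, and file names are placeholders, and the exact shape may vary with ADF version). The sink dataset would look the same, just without the compression block:

{
  "name": "GzSourceDataset",
  "properties": {
    "type": "Binary",
    "linkedServiceName": {
      "referenceName": "BlobLinkedService",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "input",
        "fileName": "file1.gz.gz"
      },
      "compression": { "type": "GZip" }
    }
  }
}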

Youtube-dl: how to get a direct download link to the merged file without creating a temp file on the server

Is there any way to create a direct download link to the merged file without creating a temp file on the server in youtube-dl?
youtube-dl -f 255+160 https://youtu.be/p-flvm1szbI
The above command will merge the two streams and output the merged file.
I want to allow users to directly download the merged file to their computers -- without creating any temp file on my server. Is this possible?
(Creating a temp file and then letting the user download it is already possible.)
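As far as I know, the merge step runs ffmpeg, which needs a seekable output file, so merging 255+160 straight to stdout will not work; a single pre-merged format, however, can be streamed without a temp file. A sketch, assuming format 22 (a pre-merged 720p MP4; availability varies per video):

# -o - writes the download to stdout; a server app could pipe this
# directly into the HTTP response instead of saving it to disk
youtube-dl -f 22 -o - https://youtu.be/p-flvm1szbI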

Rename downloaded files with Wget -i

I am trying to bulk-download images from URLs listed in a text file.
The command I am using is:
wget -i linksfile.txt
The URL structure of the images in linksfile.txt is like below:
www.domainname.com/197507/1-foto-000.jpg?20180711125016
www.domainname.com/197507/2-foto-000.jpg?20180711125030
www.domainname.com/197507/3-foto-000.jpg?20180711125044
www.domainname.com/197507/4-foto-000.jpg?20180711125059
Downloaded images are being saved with filenames like:
1-foto-000.jpg?20180711125016
2-foto-000.jpg?20180711125030
3-foto-000.jpg?20180711125044
4-foto-000.jpg?20180711125059
How can I omit all the text after .jpg? I want the file names to be saved as:
1-foto-000.jpg
2-foto-000.jpg
3-foto-000.jpg
4-foto-000.jpg
And, if possible, can the filenames be saved as:
197507-1-foto-000.jpg
197507-2-foto-000.jpg
197507-3-foto-000.jpg
197507-4-foto-000.jpg
197507 is the name of the folder where the images are hosted on the server.
I read tutorials on renaming files, but most of them focus on downloading a single file and using wget -O to change its name. Is there any way to implement this in the above scenario?
Maybe --content-disposition would do the trick.
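If the server doesn't send a usable Content-Disposition header, a small shell loop can derive the names you want from the URLs themselves. A sketch, assuming the URLs are shaped exactly like the ones above:

# read each URL, strip the query string, and prefix the folder name
while IFS= read -r url; do
  path=${url%%\?*}                          # drop everything after '?'
  folder=$(basename "$(dirname "$path")")   # e.g. 197507
  file=$(basename "$path")                  # e.g. 1-foto-000.jpg
  wget -O "${folder}-${file}" "$url"
done < linksfile.txt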

How can I download a report using wget?

I am trying to use wget to automate the download of a file (PDF format) that is generated by a report server. However, the problem I am having is that the file name is never known (it is generated randomly by the server) and the URL accepts parameters that will change, e.g. Date=, Name=, ID=, etc.
For example, if I were to open http://url.com/date=&name=&id= in Internet Explorer, I would get a download dialog prompting me to download a file named xyz123.pdf.
Is it possible to use wget to pass these parameters to the report server and automatically download the generated PDF file?
Just put the full URL in quotes and it should go and fetch the file:
wget "http://url.com/date=foo&name=baa&id=baz"

How to extract a .gz file that contains a folder with a .txt extension?

I'm currently stuck on a problem where my .gz file is "some_name.txt.gz" (the .gz is not visible, but it can be recognized with File::Type functions),
and inside the .gz file there is a FOLDER named "some_name.txt", which contains other files and folders.
However, I am not able to extract the archive the way you would manually (where the folder named "some_name.txt" is extracted along with its contents): calling the extract function from Archive::Extract just extracts "some_name.txt" as a single .txt file.
I've been searching the web for answers, but none are correct solutions. Is there a way around this?
From the Archive::Extract official docs:
"Since .gz files never hold a directory, but only a single file;"
I would recommend running tar on the folder first and then gzipping the result.
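A quick sketch of that packaging step on the command line (the folder name comes from the question):

# tar can hold a whole directory tree; gzip alone only compresses a single stream
tar -czf some_name.tar.gz some_name.txt/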
That way you can use Archive::Tar to easily extract a specific file.
Example from official docs:
$tar->extract_file( $file, [$extract_path] )
Write an entry, whose name is equivalent to the file name provided to disk. Optionally takes a second parameter, which is the full native path (including filename) the entry will be written to.
For example:
$tar->extract_file( 'name/in/archive', 'name/i/want/to/give/it' );
$tar->extract_file( $at_file_object, 'name/i/want/to/give/it' );
Returns true on success, false on failure.
Hope this helps.
Maybe you can identify these files with File::Type, rename them with a .gz extension instead of .txt, and then try Archive::Extract on them?
A gzip file can only contain a single file. If you have an archive file that contains a folder plus multiple other files and folders, then you may have a gzip file that contains a tar file. Alternatively you may have a zip file.
Can you give more details on how the archive file was created, and a listing of its contents?