Unable to submit for static analysis - sophoslabs-intelix

We are not able to submit a file for static analysis.
We tried with the request below:
curl -X POST "https://de.api.labs.sophos.com/analysis/file/static/v1/" \
  -H "Authorization: <token>" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@<file_path>"
(as suggested in the curl request text box)
We got the following error:
{"error": "Not Found"}
As per the documentation, this error means:
The requested URL does not exist (Not Found).
But we are using the same URL as mentioned in the docs.

I copied the curl command from the text box at
https://api.labs.sophos.com/doc/analysis/file/static.html and ran it directly.
It turned out that the suggested command in the code block has a different URL from the one listed under Servers.
In the Servers list the URL is https://de.api.labs.sophos.com/analysis/file/static/v1, but the generated curl command uses https://de.api.labs.sophos.com/analysis/file/static/v1/ (a trailing / is appended), which in turn returns the "Not Found" error.
I tried again after omitting the trailing / from the URL and it worked.
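For anyone hitting the same error, this is the form of the request that worked for us; the token and file path are placeholders, and the explicit Content-Type header can be dropped since curl sets the multipart header itself when -F is used:
curl -X POST "https://de.api.labs.sophos.com/analysis/file/static/v1" \
  -H "Authorization: <token>" \
  -F "file=@<file_path>"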

Swagger's "try it out" feature, which generates the curl command, is currently not operational, but you are right that the endpoint should accept requests with a trailing slash in the URL.
Thank you for the bug report; we will fix this in a later release.


Is there a way to delete a github workflow

So I tried to put a docker-compose.yml file in the .github/workflows directory; of course it tried to pick that up and run it, which didn't work. However, now this always shows up as a workflow. Is there any way to delete it?
Yes, you can delete the results of a run. See the documentation for details.
To delete a particular workflow on your Actions page, you need to delete all runs which belong to this workflow. Otherwise, it persists even if you have deleted the YAML file that had triggered it.
If you have just a couple of runs in a particular workflow, it's easier to delete them manually. But if you have a hundred runs, it might be worth running a simple script. For example, the following Python script uses the GitHub API.
Before you start, you need to install the PyGithub package (pip install PyGithub) and define three things:
PAT: create a new personal access GitHub token;
your repo name;
your action name (even if you have deleted it already, just hover over the action on the Actions page):
from github import Github
import requests

token = "ghp_1234567890abcdefghij1234567890123456"  # your PAT
repo = "octocat/my_repo"
action = "my_action.yml"

g = Github(token)
headers = {"Accept": "application/vnd.github.v3+json",
           "Authorization": f"token {token}"}

# Iterate over every run of the workflow and delete it via the REST API
for run in g.get_repo(repo).get_workflow(id_or_name=action).get_runs():
    response = requests.delete(url=run.url, headers=headers)
    if response.status_code == 204:
        print(f"Run {run.id} got deleted")
After all the runs are deleted, the workflow automatically disappears from the page.
Yes, you can delete all the runs of the workflow you want to remove; the workflow will then disappear.
https://docs.github.com/en/rest/reference/actions#delete-a-workflow-run
To delete runs programmatically:
Example (from the docs):
curl \
  -X DELETE \
  -H "Authorization: token <PERSONAL_ACCESS_TOKEN>" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/octocat/hello-world/actions/runs/42
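To remove every run of one workflow in a single pass, a hedged shell sketch using curl and jq; OWNER/REPO, WORKFLOW.yml, and the token are placeholders, and runs are fetched 100 at a time, so re-run it if you have more:
# List the runs of one workflow, then delete each by id
for run_id in $(curl -s -H "Authorization: token <PERSONAL_ACCESS_TOKEN>" \
    "https://api.github.com/repos/OWNER/REPO/actions/workflows/WORKFLOW.yml/runs?per_page=100" \
    | jq -r '.workflow_runs[].id'); do
  curl -X DELETE -H "Authorization: token <PERSONAL_ACCESS_TOKEN>" \
    "https://api.github.com/repos/OWNER/REPO/actions/runs/$run_id"
done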

How to download a file from box using wget?

I've created a direct link to a file in Box.
The previous link is to the browser web interface, so I've then shared it with a direct link.
However, if I download the file with wget, I receive garbage.
How can I download the file with wget?
I was able to download the file by making the link public, then replacing /s/ in the URL with /shared/static.
So my final command was:
curl -L https://MYUNI.box.com/shared/static/EXAMPLEtzwosac6pz --output myfile.zip
This can probably be modified for wget.
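The wget equivalent should simply be the following; wget follows redirects by default, so no extra flag is needed (same placeholder URL as above):
wget -O myfile.zip https://MYUNI.box.com/shared/static/EXAMPLEtzwosac6pz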
I might be a bit late to the party, but FWIW:
I tried to do the same thing in order to download a folder.
I went to the box UI and opened the browser's network tab on the developer tools.
Then I clicked on download and copied the first generated request as cURL; it was something like this (many headers and options removed for readability):
curl 'https://app.box.com/index.php?folder_id=122215143745&rm=box_v2_zip_folder'
The response of this request is a json object containing a link for downloading the folder:
{
    "use_zpdl": "true",
    "result": "success",
    "download_url": <some long url>,
    "progress_reporting_url": <some other url>
}
I then executed wget -L <download_url> and was able to download the file using wget.
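To chain the two steps without the browser, something like this should work, assuming jq is installed and that you pass along the cookies copied from the browser (the cookie value and output file name are placeholders):
# Ask Box for the zip-download URL, then fetch it with wget
download_url=$(curl -s -H 'Cookie: <copied from the browser>' \
  'https://app.box.com/index.php?folder_id=122215143745&rm=box_v2_zip_folder' \
  | jq -r '.download_url')
wget -O folder.zip "$download_url"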
The solution was to add the -L option to follow the HTTP redirect:
wget -v -O myfile.tgz -L https://ibm.box.com/shared/static/xxxxx.tgz
What you can do in 2022 is something like this:
wget "https://your_university.app.box.com/index.php?rm=box_download_shared_file&vanity_name=your_private_name&file_id=f_your_file_id"
You can find this link in the POST request shown in Google Chrome's Network tab when using an incognito window. Note that the double quotes are needed because the URL contains & characters.

Fetch a specific ID using Rally REST and curl

I am new to the Rally REST API. I have used curl/Perl/REST for RTC, so I am familiar with the approach, but I learned mostly from examples. I need to fetch a specific ID and the Name associated with it through curl and Perl scripting; for example, DE46835, Name: This is my defect. I haven't found any examples that fetch just a known ID. Can you point me to any documentation for this, or provide an example of how to do it?
If you know the FormattedID you should be able to query using a URL like this:
https://rally1.rallydev.com/slm/webservice/v2.0/defect?query=(FormattedID = "DE46835")
The full web service api documentation is available here:
https://rally1.rallydev.com/slm/doc/webservice/
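Note that from the command line the spaces and double quotes in the query have to be URL-encoded (and the URL quoted); a minimal curl sketch, with placeholder credentials:
curl -u "user:password" \
  "https://rally1.rallydev.com/slm/webservice/v2.0/defect?query=(FormattedID%20%3D%20%22DE46835%22)"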
Thank you for the answer. I was able to figure it out and get it to work.
Here is what I did.
$cmd = "curl -k -u \"${user}:${password}\" \"${url}\" -c ${cookies} -o ${auth}";
system($cmd);
Then
$cmd = "curl -k \"$url\" -o ${return} -b ${cookies}";
where $url=https://rally1.rallydev.com/slm/webservice/v2.0/defect?query=(FormattedID = "DE46835")

cURL example of posting photos remotely to Facebook?

I am currently having a problem, where I try to post a picture to Facebook and I get an "error" response:
"Requires upload file", with OAuth exception #324.
I have the access token in there just fine, and I can adapt my code from a cURL example relatively easily. All the examples I can find show how to do it in PHP (which I don't know) or the like. Any help with an example of how to upload a photo using just the cURL command line tool would be greatly appreciated.
I just can't find what I am looking for anywhere for the life of me.
Are you appending with the @ character?
curl -F 'access_token=xxx' \
  -F 'source=@img.jpg' \
  -F 'message=Test' \
  'https://graph.facebook.com/me/photos'
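If the photo is already hosted somewhere, the Graph API also accepts a url parameter instead of a file upload, so a variant like the following may work (token and image URL are placeholders):
curl -F 'access_token=xxx' \
  -F 'url=https://example.com/img.jpg' \
  -F 'message=Test' \
  'https://graph.facebook.com/me/photos'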

Saving html page from MATLAB web browser

Following this question I get a message on the retrieved page that "Your browser does not support JavaScript so some functionality may be missing!"
If I open this page with web(url) in MATLAB web browser and accept certificate (once per session), the page opens properly.
How can I save the page source from the browser with a script? Or from system browser? Or may be there is a way to get that page even without browser?
url='https://cgwb.nci.nih.gov/cgi-bin/hgTracks?position=chr7:55054218-55242525';
From what I could tell, the page source gets downloaded just fine; just make sure to let JavaScript run when you open the saved page locally.
[...]
<script type='text/javascript' src='../js/hgTracks.js'></script>
<noscript><b>Your browser does not support JavaScript so some functionality may be missing!</b></noscript>
[...]
Note that the solution you are using only downloads the web page without any of the attached stuff (images, .css, .js, etc..).
What you can do is call wget to get the page with all of its files:
url = 'https://cgwb.nci.nih.gov/cgi-bin/hgTracks?position=chr7:55054218-55242525';
command = ['wget --no-check-certificate --page-requisites ' url];
system( command );
If you are on a Windows machine, you can always get wget from the GnuWin32 project or from one of the many other implementations.
Will saving cookies be sufficient to solve your problem? wget can do that with --keep-session-cookies and --save-cookies filename; then you use --load-cookies filename to get your cookies back on subsequent requests. Something like the following (note I have not tested this from MATLAB, so the quoting, etc., might not be exactly right, but I use a similar shell construction in other contexts):
command_init = ['wget --no-check-certificate --page-requisites ' ...
    '--keep-session-cookies --save-cookies cookie_file.txt ' ...
    '--post-data ''user=X&pass=Y&whatever=TRUE'' ' init_url];
command_get = ['wget --no-check-certificate --page-requisites ' ...
    '--load-cookies cookie_file.txt ' url];
If you don't have any post data, but rather subsequent gets will update the cookies, you can simply use --keep-session-cookies and --save-cookies on successive get requests, as in the sketch below.
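As a plain shell sketch of that successive-get pattern (both URLs are placeholders):
# First request creates the cookie jar; later requests both load and update it
wget --no-check-certificate --keep-session-cookies --save-cookies cookie_file.txt "$first_url"
wget --no-check-certificate --load-cookies cookie_file.txt --keep-session-cookies \
    --save-cookies cookie_file.txt "$next_url"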