How to specify --downloads option? - wp-cli

I am trying to figure out how to create downloadable WooCommerce products. The trouble is the syntax of the --downloads argument for $ wp wc product create.
It appears that the downloads argument should be one or more objects with "id", "name" and "file" keys. The toughest one to figure out is "id". I tried reusing the values from a download file associated with another product, but I still get an empty [] value for the new product when I run:
$ wp wc product list --fields=id,name,downloads
$ wp wc product create --name="CLI Test Downloads" --type=simple --regular_price=20 --downloadable=true --downloads=[{"id":"2d40862d-0044-4da6-bd87-0e94bf5531d6","name":"e-SIGNES-53no2.pdf","file":"https:\/\/ventardlab.info\/wp-content\/uploads\/2019\/01\/e-53no2.pdf"}]
I get no error message, and the product is created. But when I check the new product in the WordPress dashboard, no download file is included.

I found the answer myself after browsing other posts under the wp-cli tag on Stack Overflow.
The solution is quite simple: enclose the --downloads value in single quotes.
For example --downloads='[ { .... } ]' with the appropriate syntax for the wp wc downloads object inside. Voilà!
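A sketch of the corrected command, reusing the id, name and file values from the question; the key change is the single quotes around the --downloads value, which keep the shell from splitting or mangling the JSON:
$ wp wc product create \
    --name="CLI Test Downloads" \
    --type=simple \
    --regular_price=20 \
    --downloadable=true \
    --downloads='[{"id":"2d40862d-0044-4da6-bd87-0e94bf5531d6","name":"e-SIGNES-53no2.pdf","file":"https://ventardlab.info/wp-content/uploads/2019/01/e-53no2.pdf"}]'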

How to set the date when linking an attachment to a work item using the Azure DevOps REST API?

My team is moving to Azure DevOps and I'm writing a custom tool to migrate all of the historical and current issues from our existing system, using the REST APIs of both products.
For this to be useful, it needs to be able to copy across all of the crucial data, including comments and attachments, with their correct dates and including inline links to images and other files.
Cloning a single work item at a time, I have currently got to the point where I have successfully:
1) Created the initial work item with basic info (description, created date, etc.)
2) Added all of the attachments that exist on the work item to Azure Devops
3) Linked the attachments to the work item
I now need to:
Add comments, and where necessary link to the already added attachments in the comment body.
The problem is with the dates of revisions that seem to get set by default when performing step 3 above.
If I try to add the comments after adding the attachments, because I am setting their createdDate in the past, I get an error telling me Dates must be increasing with each revision, which makes sense (somewhat) as adding each comment effectively creates a new revision of the work item.
I can add the comments before the attachments (which works fine as long as they are added chronologically), but at that point I obviously don't know the attachment ID as it hasn't been added yet.
I have tried passing an additional patch in the body of the request to link the attachment (as this works for the comments), but it seems to be ignored (just stabbing in the dark at this point as the docs are generally terrible).
I'll put the payload here as it illustrates the type of thing I want to do:
[
    // this adds the actual file ✔️
    {
        "op": "add",
        "path": "/relations/-",
        "value": {
            "rel": "AttachedFile",
            "url": "https://dev.azure.com/FPipe/3e86b8a8-8108-4a35-86f3-4fb9ea599561/_apis/wit/attachments/1e1536de-dced-4455-a9ce-48e5a0e85ece?fileName=Drew%20Spencer.png",
            "attributes": {
                "comment": "Added from YouTrack"
            }
        }
    },
    // want to do something like this
    {
        "op": "add",
        "path": "/fields/System.ChangedDate",
        "value": "2021-05-16T16:54:37.076+01:00"
    }
]
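For reference, a rough sketch of how a payload like this is submitted as a single JSON Patch call (assuming the payload above, minus the // comments, is saved as patch.json; the organization, project and work item id are placeholders, and the personal access token is read from an environment variable):
curl -X PATCH \
  -u ":$AZURE_DEVOPS_PAT" \
  -H "Content-Type: application/json-patch+json" \
  -d @patch.json \
  "https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/{id}?api-version=6.0"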
Is this possible?
If not does anybody know any decent workarounds?
I can think of two possible solutions:
Add the comments, then add and link the attachments, then update the comments with the correct links. Not great as it would require a second pass, and I'm not even sure it would work as the same issue could occur.
Just include the date from the old system in the comment body. If it comes to this, I'll do it, but I'd rather use the native features where possible.
There is a fragment in the sample in the docs with this patch, but there is literally zero explanation of what it does:
{
    "op": "test",
    "path": "/rev",
    "value": 3
},
I suspect it has something to do with revisions! The docs are genuinely terrible:
Source: https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work-items/update?view=azure-devops-rest-6.0&tabs=HTTP#operation
Thanks for your assistance.

Defining a new variable in order to make a huge iteration is giving me an error

I have an endpoint from which you can get information about products:
{{URL_API}}/products/
If I perform a GET request on that endpoint I will obtain the information for every product.
But I can also specify the product that I want to know about, e.g.:
{{URL_API}}/products/9345TERFER (the last code is the id of the product, called SKU)
The problem is that if I want to use a CSV in order to update the information of different products, I have to define a variable called sku in the endpoint so I will be able to pass the corresponding SKU for each row.
I want to create the variable {{sku}} but I do not understand how to do that. I have tried many times and failed, and I have searched a lot but I still do not really understand.
Also, should I use ":" before the declaration of the variable? I mean:
{{URL_API}}/products/:{{sku}}
or simply:
{{URL_API}}/ns/products/{{sku}}
Can you help me?
I'm super lost :(
EDIT:
I want to perform a PUT request: I want to pass different values in the body and then send the request (it throws an error: 404 Not Found).
This is what I did:
PUT|{{URL_API}}/products/{{sku}}
body:
{
    "tax_percentage": "{{tax_percentage}}",
    "store_code": "{{store_code}}",
    "markup_top": "{{markup_top}}",
    "status": "{{status}}",
    "group_prices": [
        {
            "group": "{{class_a}}",
            "price": "{{price_a}}",
            "website": "{{website_a}}"
        }
    ]
}
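For a body like this, the data file needs a column for every variable referenced above; a purely illustrative header row would be:
sku,tax_percentage,store_code,markup_top,status,class_a,price_a,website_a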
(Screenshots of the CSV file and the Postman request omitted.)
Your issue seems to be just a basic misunderstanding of how data files work with variables in Postman; here's a simple example that will work the same way for you too.
This is a basic request I'm using to resolve the variable from the data file - it's a GET request but that doesn't matter, as all we're looking at here is using a data file to resolve variables. All you need to do is ensure the URL is correct and that you SAVE the request before using the Runner.
Here's a simple CSV file created in a text editor. The heading sku is the name of the variable it will reference inside the Postman request. Each value under that heading is the value that will be used for each iteration.
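In text form, such a data file is just a plain CSV; for this question it could look like the following, with one request run per row (the second SKU is made up for illustration):
sku
9345TERFER
8452DEMO01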
In the Runner, select your Collection from the list (If you have more than one) then select the CSV file. Once imported, you will be able to see a preview of the data.
If that's correct, press the Run button. The Runner will then iterate through the file and pick up the sku value in the CSV file and use it in the request. I've expanded one of the requests so you can see that the value was used in the request.

Get a link to a specific line in a diff using GitHub API?

Using the GitHub API I'm looking for a way to generate a link to a specific line in a diff.
I can already construct a "compare between commits" URL, for example:
https://github.com/emmetog/feature-flags/compare/master...d8f9c29bfd0b87d26123b78b76feca8e4c87ad8
And visiting that url in a browser I can click on a specific line and I get this:
https://github.com/emmetog/feature-flags/compare/master...d8f9c29bfd0b87d26123b78b76feca8e4c87ad8#diff-21171d4ef87ca8e3591556dd18dfa456R26
However, I need to generate that last bit, the #diff-21171d4ef87ca8e3591556dd18dfa456R26 bit, programmatically through the GitHub API, or else find another way of linking to the specific line in the diff without going through the browser.
Is this possible?
It is impossible.
I read https://developer.github.com/v3/repos/commits/#compare-two-commits
I tried
curl https://github.com/emmetog/feature-flags/compare/master...d8f9c29bfd0b87d26123b78b76feca8e4c87ad8
Using the GitHub API, you cannot ask for "line 26 of the diff" between the new and old versions of the file src/Emmetog/FeatureFlag/Entity/FeatureFlag.php.
If the difference between the two revisions does not touch line 26, or if the file only has, say, 10 lines, there is nothing for such a link to point at.
In the HTML page, the id diff-21171d4ef87ca8e3591556dd18dfa456R26 is an auto-generated id; there is no way to determine it up front through a GitHub API request.
This may not be the best way to do it, but it looks like you can do some webscraping.
For example, in the link you provided, that line contains this element:
<td id="diff-21171d4ef87ca8e3591556dd18dfa456R26"
    data-line-number="26"
    class="blob-num blob-num-addition js-linkable-line-number selected-line"></td>
That element contains the diff hash, and you also have the line number (26). Now you just need the 'R' between the diff hash and the line number. That, I believe, is given by whether the line has been added or removed, which you can read from the CSS class: 'blob-num-addition' corresponds to 'R' and 'blob-num-deletion' corresponds to 'L'.
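A rough sketch of that scraping idea with curl and grep, using the compare URL from the question; the pattern simply pulls out the generated anchor ids and will need adjusting if GitHub changes its markup:
curl -s "https://github.com/emmetog/feature-flags/compare/master...d8f9c29bfd0b87d26123b78b76feca8e4c87ad8" \
  | grep -oE 'id="diff-[0-9a-f]+[RL][0-9]+"'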

Get NodeRef of a workflow task Alfresco

I created a workflow, and when I go to the task-edit page I'm trying to obtain the nodeRef of the file (latexexemplo-2.pdf) attached to the workflow task:
http://localhost:8080/share/page/task-edit?taskId=activiti$20649
I'm trying to do it this way:
var taskId = args.taskId
var task = workflow.getTaskById(taskId);
nodeRef = task.getPackageResources()[0].nodeRef;
But I obtain "args is not defined" ... "workflow is not defined" ... "task is not defined".
How can I get the nodeRef another way?
Unfortunately, you cannot directly access information that lives in the repository from the browser.
A quick and dirty solution is to use directly the information that is already in the page.
I have started a workflow and opened the task page as you did.
Using the browser debug tool, I have inspected the html.
As you can see in the image attached below, Alfresco stores the documents attached to the task in a hidden input. You could use YAHOO to get it.
Search for an element with the id "page_x002e_data-form_x002e_task-edit_x0023_default_assoc_packageItems".
If there is more than one document associated, the value will be a comma-separated list of nodeRefs. I am getting the first element. This of course works, as is, only if there is one and only one document associated. You should probably also handle the cases where no document is associated or there is more than one.
var nodeRef = YAHOO.util.Selector.query("#page_x002e_data-form_x002e_task-edit_x0023_default_assoc_packageItems")[0].value;
You can get the details of all the tasks currently assigned to you by using the Workflow API in FreeMarker. From there you can get the task id or the nodeRefs of the tasks.
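On a related note, a hedged alternative to scraping the page is to ask the repository tier directly over REST. Something along these lines (admin credentials and the task id from the question; the exact response fields, such as the bpm_package property holding the package nodeRef, may differ between Alfresco versions) returns the task details from which the attached documents can be resolved:
# sketch only: endpoint path and credentials are assumptions, adjust for your installation
curl -u admin:admin \
  'http://localhost:8080/alfresco/service/api/task-instances/activiti$20649'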

Perl: How to retrieve album metadata from MusicBrainz?

I am creating a Perl script which will move an MP3 file to my music folder in the format artist/album/mp3file. Now it is possible that some of my MP3 files don't have an album tag, so I thought of querying the MusicBrainz database to retrieve album metadata given the track title and artist.
I am using the WebService::MusicBrainz Perl module for this task, but I am not able to see any method that gives album metadata. My current code is:
use WebService::MusicBrainz::Track;
use feature 'say';   # say() needs this (or "use v5.10;")

my $ws = WebService::MusicBrainz::Track->new();
my $response = $ws->search({ ARTIST => 'Ryan Adams', TITLE => 'when the stars go blue' });
my $track = $response->track();
print $track->title(), " - ", $track->artist()->name(), "\n";
say $track->id();
So, how do I get the album info for a given track using MusicBrainz, and if that is not possible, what are my alternative options?
First of all, what you want is to add metadata to MP3s, which is the most common usage scenario people have. The "normal" way is to use a MusicBrainz tagger, open these files there and work with the interface to attach the correct metadata.
The suggested (GUI) tool is MusicBrainz Picard.
I also want to state that the Perl module is using the now deprecated Web Service Version 1 of MusicBrainz.
That Web Service has a couple of problems because it was made for another database scheme than the one used now at MusicBrainz.
However, the current Web Service Version 2 only has a Python library available: python-musicbrainzngs.
You can still work with the Perl module, but if you run into "weird" problems, this might be the reason.
This is how the Web Service works in general (and how it should apply directly for the Perl module as a wrapper for this web service):
Your search gives this:
http://musicbrainz.org/ws/1/track/?artist=%22Ryan%20Adams%22&title=%22when%20the%20stars%20go%20blue%22
There you get a list of recordings of this track. These recordings occur on multiple releases (ReleaseList).
You can disregard many of these, as they are of the type "compilation". You probably want the "album" releases.
You probably ask yourself why there are multiple album releases with the same name in the list.
This is because a "release" on MusicBrainz is a combination of a release-event and a couple of mediums.
You might have a US release and a German deluxe edition and so on.
All of these releases are in one "release group".
You probably want the name of this "release group", which mostly is also the name of every release in this group.
You might want to read a bit on how the MusicBrainz Database is structured.
This is only the basic use case of course.
You might run into misspellings in artist/title, multiple or missing album release groups and other things.
However, altogether it should work and you can just drop the "problem" cases in a special directory and work with them in Picard.
Picard also has other means of identifying files via "musical analysis" (PUIDs, AcoustIDs).
EDIT:
my @tracklist = $response->track_list();
foreach my $track ( @tracklist ) {
    print $track->title(), " - ", $track->artist()->name(), "\n";
    my @releaselist = $track->release_list();
    foreach my $release ( @releaselist ) {
        print " ", $release->title(), " - ", $release->type();
    }
}
This should work in general, but it doesn't. It gives you all the tracks in the response, but somehow it can't extract releases from release_list(), possibly because the schema changed or because the Perl module is broken.
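One way to check whether the problem is in the module or in the data is to hit the version 1 web service directly with the same query as above and inspect the raw XML it returns:
curl 'http://musicbrainz.org/ws/1/track/?artist=%22Ryan%20Adams%22&title=%22when%20the%20stars%20go%20blue%22'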
Check out our Perl modules for accessing the Cover Art Archive:
http://metacpan.org/pod/Net::CoverArtArchive
More info on our archive is here, including specs:
http://coverartarchive.org/
Good luck!