Multiple schema.org HowTo instructions on one page - schema.org

I have a problem with HowTo objects in schema.org structured data (JSON-LD, to be precise). I tried googling it, but "HowTo" is a rather hard name to search for.
I have two sets of instructions on the same webpage. I'd like to add two HowTo objects so that they render nicely as rich results in Google. I'm using JSON-LD to put the objects on the page. I was trying to simply put two HowTo objects:
{
  "@context": "http://schema.org",
  "@graph": [
    {
      "@type": "HowTo",
      ...
    },
    {
      "@type": "HowTo",
      ...
    }
  ]
}
but Google's Rich Result Test (https://search.google.com/test/rich-results) says that I can put only one HowTo object on a page. I'm not surprised, as that's what the Google documentation says: https://developers.google.com/search/docs/data-types/how-to#how-to (if the link doesn't lead to the correct section: Contents -> Structured data type definitions -> HowTo)
You can check it for yourself and experiment with an example HTML I've made using two examples from google documentation (I'm not putting it directly on Stack as it's a little long): https://gist.github.com/Hoxmot/2eec6a39bcd9fe2b46a8eaadea9afe27
The problem is that the documentation also says the following: https://developers.google.com/search/docs/data-types/how-to#how-to-section (Contents -> Structured data type definitions -> HowToSection)
For listing multiple ways to complete a task, use multiple HowTo objects.
Is there a way to have two (or more) HowTo instructions on one page? I was also trying to use multiple HowToSections but that doesn't work as desired - it's listed as one solution.
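For reference, a single complete HowTo object looks like the sketch below (the name and step texts are illustrative, not from Google's examples; field names follow the schema.org HowTo/HowToStep vocabulary):

```json
{
  "@context": "http://schema.org",
  "@type": "HowTo",
  "name": "How to change a tire",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Loosen the lug nuts",
      "text": "Use the wrench to loosen each lug nut."
    },
    {
      "@type": "HowToStep",
      "name": "Jack up the car",
      "text": "Place the jack under the frame and raise the car."
    }
  ]
}
```

Pasting two such objects into a `@graph` array is valid JSON-LD; the restriction to one HowTo per page comes from Google's rich-result rules, not from the format itself.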

Related

How to set the date when linking an attachment to a work item using Azure Devops REST API?

My team is moving to Azure DevOps and I'm writing a custom tool to migrate all of the historical and current issues from our existing system, using the REST APIs of both products.
For this to be useful, it needs to be able to copy across all of the crucial data, including comments and attachments, with their correct dates and including inline links to images and other files.
Cloning a single work item at a time, I have currently got to the point where I have successfully:
1) Created the initial work item with basic info (description, created date, etc.)
2) Added all of the attachments that exist on the work item to Azure DevOps
3) Linked the attachments to the work item
I now need to:
Add comments, and where necessary link to the already added attachments in the comment body.
The problem is with the dates of revisions that seem to get set by default when performing step 3 above.
If I try to add the comments after adding the attachments, because I am setting their createdDate in the past, I get an error telling me Dates must be increasing with each revision, which makes sense (somewhat) as adding each comment effectively creates a new revision of the work item.
I can add the comments before the attachments (which works fine as long as they are added chronologically), but at that point I obviously don't know the attachment ID as it hasn't been added yet.
I have tried passing an additional patch in the body of the request to link the attachment (as this works for the comments), but it seems to be ignored (just stabbing in the dark at this point as the docs are generally terrible).
I'll put the payload here as it illustrates the type of thing I want to do:
[
  // this adds the actual file ✔️
  {
    "op": "add",
    "path": "/relations/-",
    "value": {
      "rel": "AttachedFile",
      "url": "https://dev.azure.com/FPipe/3e86b8a8-8108-4a35-86f3-4fb9ea599561/_apis/wit/attachments/1e1536de-dced-4455-a9ce-48e5a0e85ece?fileName=Drew%20Spencer.png",
      "attributes": {
        "comment": "Added from YouTrack"
      }
    }
  },
  // want to do something like this
  {
    "op": "add",
    "path": "/fields/System.ChangedDate",
    "value": "2021-05-16T16:54:37.076+01:00"
  }
]
Is this possible?
If not does anybody know any decent workarounds?
I can think of two possible solutions:
1) Add the comments, then add and link the attachments, then update the comments with the correct links. Not great, as it would require a second pass, and I'm not even sure it would work, since the same issue could occur.
2) Just include the date from the old system in the comment body. If it comes to this, I'll do it, but I'd rather use the native features where possible.
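A third workaround worth testing: the work item update API accepts a bypassRules=true query parameter, which (given the right permissions on the account) allows writing normally read-only fields such as System.ChangedDate, so the link and the date could go into one PATCH. A minimal sketch in Python; the organization, project and attachment URLs are placeholders, and I haven't verified this against your setup:

```python
import json

def build_link_patch(attachment_url, changed_date, comment="Added from YouTrack"):
    """Build one JSON Patch that links an attachment AND pins the revision date.

    Intended to be sent with ?bypassRules=true, which lets Azure DevOps
    accept a write to System.ChangedDate (requires 'bypass rules' rights).
    """
    return [
        {
            "op": "add",
            "path": "/relations/-",
            "value": {
                "rel": "AttachedFile",
                "url": attachment_url,
                "attributes": {"comment": comment},
            },
        },
        {
            "op": "add",
            "path": "/fields/System.ChangedDate",
            "value": changed_date,
        },
    ]

# Hypothetical usage (org, project, work-item id and token are placeholders):
# import requests
# url = ("https://dev.azure.com/{org}/{project}/_apis/wit/workitems/{id}"
#        "?api-version=6.0&bypassRules=true")
# requests.patch(url, data=json.dumps(patch),
#                headers={"Content-Type": "application/json-patch+json"},
#                auth=("", personal_access_token))

patch = build_link_patch(
    "https://dev.azure.com/.../_apis/wit/attachments/...?fileName=x.png",
    "2021-05-16T16:54:37.076+01:00",
)
print(len(patch))  # 2 operations in a single request
```

Since both operations travel in one revision, the "dates must be increasing" check only has to pass once per attachment.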
There is a fragment in the sample in the docs with this patch, but there is literally zero explanation of what it does:
{
  "op": "test",
  "path": "/rev",
  "value": 3
},
I suspect something to do with revisions! As far as I can tell, "test" is a standard JSON Patch (RFC 6902) operation: it makes the request fail unless the work item is currently at revision 3, guarding against concurrent edits. The docs are genuinely terrible:
Source: https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work-items/update?view=azure-devops-rest-6.0&tabs=HTTP#operation
Thanks for your assistance.

Add an Intro page to exam in R/exams

I am using R/exams to generate Moodle exams (Thanks Achim and team). I would like to make an introductory page to set the scenario for the exam. Is there a way to do it? (Now, I am generating a schoice with answerlist blank.)
Thanks!
João Marôco
Usually, I wouldn't do this "inside" the exam but "outside". In Moodle you can include a "Description" in the "General Settings" when editing the quiz. This is where I would put all the general information so that students read this before starting with the actual questions.
If you want to include R-generated content (R output, graphics, data, ...) in this description I would usually include this in "Question 1" rather than as a "Question 0" without any actual questions.
The "description" question type could be used for the latter, though. However, it is currently not supported in exams2moodle() (I'll put it on the wishlist). You could manually work around this in the following steps:
1) Create a string question with the desired content and set the associated expoints to 0.
2) Generate the Moodle XML output as usual with exams2moodle().
3) Open the XML file in a text editor (or simply within RStudio) and replace <question type="shortanswer"> with <question type="description"> for the relevant questions.
4) In the XML file omit the <answer>...</answer> for the relevant questions.
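The last two steps can be scripted instead of done by hand. A minimal sketch in Python; note it is a blunt text pass that converts every shortanswer question and drops every <answer> block, so narrow the patterns first if your exam also contains real string questions (the sample XML below is illustrative):

```python
import re

def description_ify(xml_text):
    """Turn Moodle 'shortanswer' questions into 'description' questions.

    Converts the question type attribute, then strips the <answer> blocks
    that the description type does not use.
    """
    out = xml_text.replace('<question type="shortanswer">',
                           '<question type="description">')
    # remove <answer>...</answer> blocks (non-greedy, may span lines)
    out = re.sub(r"<answer.*?</answer>", "", out, flags=re.DOTALL)
    return out

sample = ('<question type="shortanswer"><name>intro</name>'
          '<answer fraction="100"><text>x</text></answer></question>')
print(description_ify(sample))
# -> <question type="description"><name>intro</name></question>
```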
Caveat: As you are aware it is technically possible to share the same data across subsequent exercises within the same exam. If .Rnw exercises are used, all variables from the exercises are created in the global environment (.GlobalEnv) and can be easily accessed anyway. If .Rmd exercises are used, it is necessary to set the envir argument to a dedicated shared environment (e.g., .GlobalEnv or a new.env()) in exams2moodle(..., envir = ...). However, if this is done then no random exercises must be drawn in Moodle because this would break up the connections between the exercises (i.e., the first replication in Question 1 is not necessarily followed by the first replication in Question 2). Instead you have to put together tests with a fixed selection of exercises (i.e., always the first replication for all questions or the second replication for all questions, ...).

Make tile-ID request URL work with mapbox-style "satellite-streets" using folium

I use Python for plotting geospatial data on maps.
For certain map-styles, such as ["basic", "streets", "outdoors", "light", "dark", "satellite", "satellite-streets"], I need a mapbox-access token and for some geospatial plotting packages like folium I even need to create my own link for retrieving the map-tiles.
So far, it worked great with the style "satellite":
mapbox_style = "satellite"
mapbox_access_token = "....blabla"
request_link = f"https://api.mapbox.com/v4/mapbox.{mapbox_style}/{{z}}/{{x}}/{{y}}@2x.jpg90?access_token={mapbox_access_token}"
However, when choosing "satellite-streets" as the Mapbox tile ID, the output doesn't show a background map anymore. It fails whether I insert "satellite-streets", "satellitestreets" or "satellite_streets" into the aforementioned link string.
Why is that, and how can I find out the correct tile-ID name for "satellite-streets"?
I found an answer when reaching out to customer support.
Apparently, one has to access the static APIs, which have specific names listed on their website:
"In general, the styles that you mentioned, including "satellite_streets", are our classic styles that are going to be deprecated starting June 1st. I would recommend using our modern static API and the equivalent modern styles. This will allow you to see the most updated street data as well. Like the example request below:
https://api.mapbox.com/styles/v1/mapbox/satellite-streets-v11/tiles/1/1/0?access_token={your_token}
Here is more info on the deprecation of the classic styles and the migration guide for them."
My personal adaptation, after having tried everything out myself: by combining the above with the details on how to construct a Mapbox request link in the documentation on Mapbox's website, I finally managed to make it work.
An example request looks like so (in python using f-strings):
mapbox_tile_URL = f"https://api.mapbox.com/styles/v1/mapbox/{tileset_ID_str}/tiles/{tilesize_pixels}/{{z}}/{{x}}/{{y}}@2x?access_token={mapbox_access_token}"
The tileset_ID_str could be e.g. "satellite-streets-v11" which can be seen at the following link containing valid static maps.
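Putting it together for folium, a minimal sketch; the token is a placeholder, and the attr text is my own choice (folium requires some attribution string for custom tile URLs):

```python
# Build a Mapbox Static Tiles API URL usable as folium's `tiles` argument.
tileset_ID_str = "satellite-streets-v11"
tilesize_pixels = "256"
mapbox_access_token = "YOUR_TOKEN"  # placeholder

mapbox_tile_URL = (
    f"https://api.mapbox.com/styles/v1/mapbox/{tileset_ID_str}"
    f"/tiles/{tilesize_pixels}/{{z}}/{{x}}/{{y}}@2x"
    f"?access_token={mapbox_access_token}"
)

# Hypothetical folium usage (coordinates are arbitrary):
# import folium
# m = folium.Map(location=[48.1, 11.6], zoom_start=10,
#                tiles=mapbox_tile_URL, attr="Mapbox")
# m.save("map.html")

print(mapbox_tile_URL)
```

The doubled braces in the f-string survive as literal {z}/{x}/{y} placeholders, which folium fills in per tile.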

How to create a button based chatbot

I am currently working on a project that involves creating a "chatbot". It's not going to be any kind of "AI", deep learning or anything fancy like that. It's going to be a "menu/button based chatbot" (as they call it around the web). I have no idea how to tackle this kind of functionality. It's going to be inside an app; I will be using Ionic, and the "database" will be stored on Firebase as JSON (however, I am open to using something else if it's easier).
When communicating with the chatbot, the user will only be able to use closed answers, mainly 1, 2, 3 or 4 responses. Each response will lead to the next question, and so on.
We then have to create a structure of all the different possibilities.
Let's say the chatbot starts by asking "What do you want to eat for dinner?" and the user has 2 choices: pasta, pizza. Then, depending on the user's answer, we display the next question. So the user has a very limited range of answers, but we need to cater for every path.
What I am thinking so far is having a JSON config with blocks like this:
{
  "address": "0001",
  "type": 1, // the type will probably help to identify the kind of block
  "question": "What do you want to eat for dinner?",
  "responses": [
    {
      "title": "pasta",
      "link": "0002"
    },
    {
      "title": "pizza",
      "link": "0003"
    }
  ]
},
{
  "address": "0002",
  "type": 1,
  "question": "Great you want to eat some pasta, what else?",
  "responses": [
    {
      "title": "Cheese",
      "link": "0004"
    },
    {
      "title": "Cake",
      "link": "0005"
    }
  ]
}, etc.
So when the user clicks on "pasta", I should display the next block (the one with the address 0002). I could have different block types: ones that display a text question, others that just display a video in the chat, or any other kind (so blocks might be more complex, with video URLs, images, etc.).
I am thinking of creating a very basic tool that will help to create all the different blocks and then generate the massive JSON config.
But this has two downsides:
-> I need to define one block for each interaction. (This will lead to having hundreds of blocks if the chatbot becomes big.)
-> Let's say I want to offer something a bit more personalised and I need to use some data stored outside of the chatbot (on the user profile, for example).
Let's say the user has specified whether he is or is not allergic to cheese.
pizza -> cheese (he is allergic) -> go to 'you should avoid cheese'
pizza -> cheese (he is not allergic) -> go to 'great, what do you want for dessert?'
But in my model, cheese always goes to address 0004, so this is not going to work. I need the block to have some "rules" about where to go next depending on some variables, but this seems to be tricky...
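One way to model those "rules": give each response an optional ordered list of conditions, each naming a profile flag and a target block; the first matching condition wins, otherwise the default link applies. A minimal sketch (the block shapes, field names and addresses here are my own invention, extending the JSON above):

```python
# Each response may carry "rules": checked in order against the user
# profile; the first match decides the next block, else "link" is used.
blocks = {
    "0003": {
        "question": "Pizza it is. Any topping?",
        "responses": [
            {
                "title": "cheese",
                "link": "0005",  # default target: not allergic
                "rules": [
                    {"if": {"allergic_to_cheese": True}, "goto": "0004"},
                ],
            },
        ],
    },
    "0004": {"question": "You should avoid cheese."},
    "0005": {"question": "Great, what do you want for dessert?"},
}

def next_block(response, profile):
    """Return the address of the next block for a clicked response."""
    for rule in response.get("rules", []):
        if all(profile.get(k) == v for k, v in rule["if"].items()):
            return rule["goto"]
    return response["link"]

choice = blocks["0003"]["responses"][0]
print(next_block(choice, {"allergic_to_cheese": True}))   # 0004
print(next_block(choice, {"allergic_to_cheese": False}))  # 0005
```

This keeps the config declarative: the engine stays a dozen lines, and personalisation lives in the data rather than in code per block.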
I am open to use any kind of API, I've seen tons but not something I can easily integrate in Ionic. I want to have some control on the design and I would like to avoid being dependent of an external solution, but still I am curious if anything can fit my needs.
I would take a look at Watson Assistant and look at the different kinds of responses you can implement.
The image above shows what adding an option response looks like, similar to the example JSON you posted.
It might be too much in some cases, but having a framework to handle some of the dialog node traversals is quite handy.

IBM Chatbot Assistant - Array with same values

I have this piece of code in JSON editor of Watson:
"context": {
  "array": "<? entities['spare_part'].![literal] ?>",
  "array_size": "<? $array.size() ?>"
}
When the input of the user is, for example, "Hello, I need a valve, and the part number of the valve is 1234", the size of the array ends up being 2, since the user mentions the word "valve" twice. Some nodes are executed depending on the size of the array. For instance, if the size of the array is 1, some nodes will be ignored, because they are executed only if the size of the array is 2.
I want the array to store only distinct values; basically, I don't want the array to store duplicates of the same value, in my case the two "valve" mentions. If this is possible somehow, please show me a way.
All that can be done, but the best approach depends on the overall system architecture. Remember that Watson Assistant is a conversation service, not a data processing platform...
You can process the JSON directly in Watson Assistant using the built-in methods and SpEL; see these links to get started:
- https://console.bluemix.net/docs/services/conversation/expression-language.html#expressions-for-accessing-objects
- http://docs.spring.io/spring/docs/current/spring-framework-reference/html/expressions.html
- https://console.bluemix.net/docs/services/conversation/dialog-methods.html#expression-language-methods
That would require some coding within the dialog nodes, which could be OK. What I would recommend, though, is either to process the data in the app that drives the dialog (you need that app anyway) or to code up small server actions to transform the data.
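If you do the transformation in the app driving the dialog, deduplicating the entity literals while keeping their order is a one-liner; a sketch in Python (the example input mirrors the "valve" utterance above):

```python
def unique_in_order(values):
    """Drop duplicate entity values, keeping first-seen order."""
    return list(dict.fromkeys(values))

# e.g. the literals Watson extracted from
# "Hello, I need a valve, and the part number of the valve is 1234"
print(unique_in_order(["valve", "valve"]))  # ['valve']
```

The deduplicated list can then be written back into the context, so the array-size checks in the dialog nodes count distinct parts only.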
If it is the word itself you are looking for, you can use contextual entities to train for this.
As an example, I created the following intent (along with the general intents from the catalog).
For each example I highlighted the word "valve", which is the one I am interested in, and added it to the entity.
Now when I test it I get the following.
All this was done in a couple of minutes. For production level you may want to add more examples, or think about how you want to annotate.