The old version of the LUIS console used to have an "import utterances" function. The new console does not seem to have this function anymore.
In addition, the API (I think) used to have this option, but now it does not seem to be there.
Am I missing something, or is this a feature that is being added back-in at a later stage?
In addition, just querying the published endpoint doesn't seem to reliably 'import' the utterances into a pool. We can assume they have been imported and will eventually surface via the 'suggested' tab, but this really isn't good enough. We need to be able to import utterances in batch and then label them.
The LUIS V2 authoring API currently includes a batch endpoint for importing JSON that represents each utterance and the labels within it, which is documented here.
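For illustration, here is a rough sketch of what a call to that batch endpoint could look like from JavaScript. The region, appId, versionId and the exact field names (text, intentName, entityLabels with startCharIndex/endCharIndex) are from memory of the v2 authoring docs, so treat them as assumptions and verify against the linked documentation:

    // Hedged sketch: batch-adding labeled utterances via the LUIS v2 authoring API.
    // Endpoint path and field names are assumptions based on the v2 docs; verify them.
    var examples = [
      {
        text: "book a flight to London",      // the raw utterance
        intentName: "BookFlight",             // intent label for the whole utterance
        entityLabels: [
          // character offsets are inclusive and point at "London"
          { entityName: "Destination", startCharIndex: 17, endCharIndex: 22 }
        ]
      }
    ];

    fetch("https://<region>.api.cognitive.microsoft.com/luis/api/v2.0/apps/<appId>/versions/<versionId>/examples", {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": "<authoring-key>",
        "Content-Type": "application/json"
      },
      body: JSON.stringify(examples)
    }).then(function (r) { return r.json(); }).then(console.log);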
I take it, however, that you're looking for a way to just upload a list of undecorated utterances and then label them in the UI?
Regarding the comment about querying the endpoint: LUIS will only show suggested utterances that it feels will help improve the model if you label them. If you label every utterance that comes into the system, you risk over-fitting the model to common queries.
I have an AnyLogic model to test. There are some problems with the execution of the model. I want to find the person who made certain parts of the model so I can get the right solution from him/her. Is there any way to find out which user made which part? I couldn't find anything relevant in the program.
What do you mean? Do you have a model that was built by a team and you want to know who wrote which code in the model so you can go and speak to them directly? Or do you have a resource in your model that works on a part (agent) and you want an assessment of what each resource worked on?
For the first option:
you can't, unless everybody puts their name in the code and descriptions indicating what they changed, or you use a version control tool such as Git.
For the second option:
your question is a bit too broad... but what you can do to verify your model is to use the execution log, which will tell you what resource worked on what parts and for how long.
you turn on your execution logs by clicking the database in your Projects tab and, in the database properties, checking "Log model execution" in the Log section.
After that you want to check the resource_unit_states_log table, which has all the information on what each of your resources did in detail, which answers your question.
There are other ways, but this should be the answer :)
We have been using Dynatrace in our organization for a long time. It is a really great tool for pointing out performance pain areas and knowing what's happening in the system. However, we found that reporting is not great. In our setup, data gets wiped after 20 days for non-PRD environments, and all the details are lost with it. To keep track of the underlying calls, we currently need to take screenshots or put the data manually into an Excel file. This helps us compare old results with the latest development/improvements.
We were wondering if there is any Dynatrace API available that can push the PurePath information in JSON format. We checked the Dynatrace API page, but there is none. Creating Excel files manually is a waste of time and adds no value. Has anyone else found a workaround for this?
E.g. for the image, we want JSON containing the list of underlying DB calls shown under the controller, their start/end times, time-consumed details, etc.
Please help
I'm very interested in the emerging trend of comments-per-paragraph systems (also called "annotation systems"), such as the ones implemented by medium.com and qz.com, and I'm looking at the idea of developing one of my own.
Question: it seems they are mainly implemented via JavaScript that runs through the text's HTML paragraphs, each uniquely identified by an id attribute (or, in the case of Medium, a name attribute). Does that mean their CMS actually stores each paragraph as a separate entry in the database? That seems overly complex to me, but otherwise, how do they manage the fact that a paragraph can be deleted, edited or moved around in the overall text? How would the unique id be preserved if the author changes the paragraph?
How is that unique id logically structured? (post_id + position_in_post)?
Thank you for your insights...
I can't speak to the Medium side, but as one of the developers for Quartz, I can give insight into how qz.com annotations work.
The annotations code is custom PHP and is independent of the CMS used for publishing articles (WordPress VIP). We do indeed store a reference to each paragraph as a row in the database in order to track any updates to the article content. We call this an annotation thread, and when a user saves an annotation, the threadId gets stored along with the annotation.
We do not have a unique id stored on the WordPress side for each paragraph; instead we store the paragraph's relative position in that article (nodeIndex "3" and nodeSelector "p" == the third p-tag in the content body for a given article) and the JavaScript determines exactly where to place the annotation block. We went this route to avoid heavier customizations on the WordPress side, though depending on your CMS it may be easier to address this directly in the CMS code and add unique ids to the HTML before sending it to the client.
Every time an update to an article is published, each paragraph in the updated article is compared against what was previously stored with the annotation threads for that article. If the position and paragraph text do not match up, it attempts to find the paragraph that is the closest match and updates the row for that thread; new threads are created and deleted where appropriate. All of this is handled server-side whenever changes are published to an article.
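Not Quartz's actual code, but a minimal sketch of that closest-match idea, assuming each stored thread keeps the paragraph's index and text:

    // Hedged sketch (not Quartz's actual code) of the server-side re-matching step.
    // Each stored annotation thread remembers the paragraph's position and text; on
    // publish we try to re-attach every thread to the best-matching new paragraph.
    function rematchThreads(threads, newParagraphs) {
      return threads.map(function (thread) {
        // Fast path: same position, identical text, nothing to do.
        if (newParagraphs[thread.nodeIndex] === thread.text) return thread;

        // Otherwise pick the new paragraph whose text is the closest match.
        var best = { index: -1, score: 0 };
        newParagraphs.forEach(function (p, i) {
          var score = similarity(thread.text, p);
          if (score > best.score) best = { index: i, score: score };
        });

        // Below some threshold the thread is treated as orphaned (its paragraph is gone).
        if (best.score < 0.5) return { id: thread.id, orphaned: true };
        return { id: thread.id, nodeIndex: best.index, text: newParagraphs[best.index] };
      });
    }

    // Crude word-overlap similarity; a real implementation might use edit distance.
    function similarity(a, b) {
      var wordsA = a.toLowerCase().split(/\s+/);
      var wordsB = b.toLowerCase().split(/\s+/);
      var shared = wordsA.filter(function (w) { return wordsB.indexOf(w) !== -1; }).length;
      return shared / Math.max(wordsA.length, wordsB.length);
    }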
A couple of alternate implementations that are also worth looking at are Gawker's Kinja text annotations (currently in use on Jalopnik) and the word-for-word annotations of rapgenius.com.
(disclaimer: I'm a factlink dev.)
I work for a company trying to allow per-paragraph (or per-phrase) commenting on arbitrary sites. Essentially, you've got two choices to identify the anchor of a comment.
1. Remember the structure of the page (e.g. some path from a root to a paragraph), and place comments at the same position next time.
2. Identify the content of the paragraph and place comments near identical or similar content next time.
Both systems have their downsides, but you pretty much need to go with option 2 if you want a robust system. Structural identification is fragile in the face of changing structure. In particular, irrelevant changes such as theming or the precise HTML tags used can significantly impact the "path". When that happens, you really can't fix it - unless you inspect the content, i.e. option (2).
Sam describes what comes down to a server-side content-based approach in his answer. Purely client-side content-based matching is what factlink and (IIRC) hypothesis use. Most browsers support non-standard but fast substring search in page content using either window.find or TextRange.findText. Alternatively, you could walk the DOM, which is slower but gives you the flexibility to implement (e.g.) fuzzy matching.
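As a minimal sketch of that client-side approach (assuming window.find, which is non-standard, so a real system needs a DOM-walking fallback and fuzzy matching):

    // Hedged sketch of client-side content-based anchoring: given the quoted text an
    // annotation was attached to, try to find it again in the live page.
    function locateQuote(quote) {
      var selection = window.getSelection();
      selection.removeAllRanges();            // search from the top of the document

      if (window.find && window.find(quote)) {
        // window.find selects the match; capture it as a Range for highlighting.
        return selection.getRangeAt(0);
      }
      return null;                            // not found: the content changed too much
    }

    var range = locateQuote("the exact sentence the comment was anchored to");
    if (range) {
      var mark = document.createElement("mark");
      range.surroundContents(mark);           // simple highlight; throws if the range crosses element boundaries
    }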
It may seem like client-side matching is overkill or complex, but really, it's simpler: it's a very robust way to decouple your content-management from your commenting. Neither is really simple, so decoupling those concerns can be a win.
I had created a fiddle along the same lines to demonstrate the power of jQuery during a training session.
http://fiddle.jshell.net/fotuzlab/Lwhu5/
It might help as a starting point, along with Sam's detailed and useful insights. You get the value of the text field in the jQuery function, from where you can send it across to your CMS using Ajax/APIs.
PS: The function is not production-ready; it's only meant as a starting point. A little tweaking will make it usable.
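The hand-off it hints at could look roughly like this; the /annotations endpoint and the field names below are placeholders, not part of the fiddle or any particular CMS:

    // Hedged sketch: post the captured comment text to the CMS. The endpoint and
    // payload fields below are hypothetical placeholders.
    $(".comment-form").on("submit", function (e) {
      e.preventDefault();
      var $form = $(this);
      $.ajax({
        url: "/annotations",                        // hypothetical CMS/API endpoint
        method: "POST",
        data: {
          paragraphId: $form.data("paragraph-id"),  // which paragraph the comment targets
          text: $form.find("textarea").val()        // the comment text itself
        }
      }).done(function () {
        $form.trigger("reset");
      });
    });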
I've recently published a post on how to do this with WordPress, building on an existing plugin.
Like qz.com, I assign paragraph ids on the client and then provide that info to WordPress to store as comment meta when a new comment is created. I hash the paragraph text to create the id, which means that the order of paragraphs is unimportant, but it also means that if a paragraph is edited, any associated comments become orphaned.
At first I thought this was an issue, but thinking about it, if a reader has commented on a paragraph, then editing that text afterwards seems a little sneaky.
The code is freely available on GitHub if you feel like forking it and enhancing it.
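Not the plugin's actual code, but a minimal sketch of the hashing idea described above, assuming a client-side pass over the article's <p> tags:

    // Derive a paragraph id from the paragraph text, so the id survives reordering
    // but changes (orphaning its comments) if the text itself is edited.
    function paragraphId(text) {
      // Simple djb2-style string hash; any stable hash of the text would do.
      var normalized = text.trim().replace(/\s+/g, " ");
      var hash = 5381;
      for (var i = 0; i < normalized.length; i++) {
        hash = ((hash * 33) ^ normalized.charCodeAt(i)) >>> 0;
      }
      return "para-" + hash.toString(36);
    }

    // Assign ids on the client before wiring up the comment UI.
    document.querySelectorAll("article p").forEach(function (p) {
      p.id = paragraphId(p.textContent);
    });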
There is one other WordPress plugin, called "CommentPress", which has existed for a long time.
I use an old version of this plugin for my blog and it works very well.
You can choose to comment per line or per paragraph, and the ergonomics are really well thought out!
A demo here:
http://futureofthebook.org/
and all the code is on github:
https://github.com/IFBook/commentpress-core
After a quick look at the code, it seems they use the second approach, as @Eamon Nerbonne explains in his answer.
They parse each paragraph to create a signature based on the first character of each word. Here is the function that does that.
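CommentPress itself is PHP, but as a rough illustration (not the plugin's actual function), that signature idea boils down to something like this:

    // Build a paragraph signature from the first character of each word, so small
    // edits still produce a mostly-similar signature for matching.
    function paragraphSignature(text) {
      return text
        .trim()
        .split(/\s+/)                                       // split into words
        .map(function (word) { return word.charAt(0); })    // first character of each word
        .join("");
    }

    paragraphSignature("The quick brown fox jumps over the lazy dog");
    // => "Tqbfjotld"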
In case someone comes looking here: I've implemented Medium-like functionality as a Django app.
It is open source and can be found as a package on PyPI and on GitHub.
I used one of my other apps, blogging, to allocate unique paragraph IDs to each content object (currently we're only looking at <p> tags) and to add some extra internal metadata in the backend while storing it in the DB (MySQL currently, but we've JSONed the blob; this method is more natively suited to document-oriented DBs). The frontend is mainly jQuery-driven, with a REST API connecting the backend to the frontend.
I took cues from this post, but then rejected creating some kind of digest value from the paragraph because the content can change. What I wanted was to preserve the annotations as long as the paragraph was not completely overwritten. For the complete-overwrite case, I provided for collecting the annotations in an orphaned bucket.
More in these tutorials
A legacy version of the same is running on those tutorial pages; that was the first revision. (You won't be able to post without logging in, but you can always log in using a social account to check it out :-) )
Basically, I am aiming to make an easier interface for managing time-based triggers that control Google Apps Scripts. So far I have not found anything that would allow me to get details on the current triggers, specifically trigger times. There seem to be only a handful of methods supporting trigger management currently.
Documentation:
Class Trigger
Thanks for any suggestions or pointers!
The Apps Script documentation also contains Managing Triggers Programmatically, which is a very promising title.
However, this sums it up:
The only way to programmatically modify an existing trigger is to delete it and then create a new one.
Sometimes we find undocumented APIs by playing with auto-completion, but there don't seem to be any for Triggers at this time.
I tried this little gem, hoping to get logs full of trigger information:
var triggers = ScriptApp.getProjectTriggers();
for (var i in triggers) {
  Logger.log(Utilities.jsonStringify(triggers[i]));
}
All I got was "Trigger" repeated ad nauseam. So no help there; the Trigger object is locked down.
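For what it's worth, the documented getters on the Trigger class do expose a little metadata, just not the schedule this question is really after. A small sketch, assuming the method names in the Class Trigger docs above are still current:

    function listTriggerInfo() {
      var triggers = ScriptApp.getProjectTriggers();
      for (var i = 0; i < triggers.length; i++) {
        // Handler name, event type, source and id are available; trigger times are not.
        Logger.log('handler=%s, eventType=%s, source=%s, id=%s',
                   triggers[i].getHandlerFunction(),
                   triggers[i].getEventType(),
                   triggers[i].getTriggerSource(),
                   triggers[i].getUniqueId());
      }
    }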
I'm surprised that there's no request in the Issue Tracker against the Triggers Component to add APIs to get this additional information, or to support more programmability. You should add one, and report it here so others can star it (cast their vote).
Per @Mogsdad's suggestion I have submitted a feature request via the Issue Tracker. Please go and cast your vote for this issue and add any other suggestions for the request. The issue is fairly broad, and there may be a lot of detailed information that could be useful if the right API is created. So please add comments in the issue tracker if there are parts you feel need to be addressed in an expansion of the current Trigger API.
Our team captures user stories in TFS. I use the great tool TeamSpec to dump these into a Word document for good ol' fashioned easy reading.
Now, we are at the point where we need to produce a functional specification that describes the software that will be built to support those user stories.
Again, I'd probably like this functional spec in Word ultimately, as it has to be a readable document that the customer can read and sign off on.
That said, I would really love to have a tool that helps me map user stories to functional requirements, possibly even generating the matrix (in both directions) for easy reference.
What tools are there that might help me? Googling is no help at all. :)
Thanks in advance!
Here is what we have decided to do.
There will be a High Level Design document (HDD) and a Functional Specification document.
The customer will get both, but the HDD will be written up front, for sign off, agreement and limiting of scope.
The functional specification will also be signed off on, but incrementally: we will fill in a section as we decide what will be done, and get sign-off.
The HDD will have user stories in it; each user story is a ticket in TFS, synced via TeamSpec.
The functional spec will have tasks in it, which are user stories broken down into features/requirements. TeamSpec is again used to populate this content in the functional spec.
So the only thing left is showing a matrix. I will use TFS's ability to export an .xls file; if it shows the user story -> task breakdown, that gives me the matrix we need.
So that's it really.
It solves the issue of the HDD, the functional spec, and the mapping of requirements and items between the two, while still recording everything in TFS for access by developers.