I have this workflow with ParaView where I
1) load a DICOM,
2) load a state file that applies actions to the DICOM (sources, filters, custom filters, etc.),
3) apply a custom macro that initializes everything.
Then I have some custom macros that move lines in the final result.
I would like to make a desktop app with a simple UI where I have one button that executes the three initialization steps, and then three buttons that execute the custom macros.
So I'm basically making a simpler ParaView.
I have used ParaView's trace function to generate a Python script with all the steps and then executed it in the pvpython shell to check whether I get the same result as in the ParaView GUI.
But even a simple script that just loads the DICOM makes the window (Visualization Toolkit - Win32OpenGL) stop responding.
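The minimal script that already reproduces the problem is roughly the following (the DICOM path is a placeholder; the real traced script is much longer):

# Cut-down sketch of the traced script; the path below is a placeholder.
from paraview.simple import *

reader = OpenDataFile('C:/data/dicom/IM_0001.dcm')   # load the DICOM series
view = GetActiveViewOrCreate('RenderView')
Show(reader, view)
Render(view)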
What do you think is the best approach to do this?
This is fully supported by ParaView, as ParaView is not only an application but also a framework.
This is documented here: https://www.paraview.org/Wiki/Writing_Custom_Applications
Examples are in the ParaView code: https://gitlab.kitware.com/paraview/paraview/tree/master/Examples/CustomApplications
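As an aside on the render window showing "Not Responding": pvpython does not spin a GUI event loop on its own, so once your script has called Render() and either finished or gone back to waiting for shell input, the window cannot process OS messages and Windows flags it as unresponsive. If your ParaView build provides paraview.simple.Interact(), calling it at the end of the script hands control to the interactor and keeps the window alive while you test.

If a full Qt custom application turns out to be more than you need, a lighter-weight route (this is only a sketch of the scripting approach, not something taken from the pages above, and all paths are placeholders) is to wrap your three initialization steps in a paraview.simple function and bind it, together with your three macros, to buttons in whatever Python GUI toolkit you prefer:

# Sketch only: bundles the three initialization steps so a single button
# callback can run them. Paths are placeholders; ParaView macros are plain
# Python scripts that use paraview.simple, so executing them directly works.
from paraview.simple import *

def run_macro(path):
    with open(path) as f:
        exec(f.read(), {'__name__': '__main__'})

def initialize():
    OpenDataFile('C:/data/dicom/IM_0001.dcm')    # step 1: load the DICOM
    LoadState('C:/data/pipeline.pvsm')           # step 2: load the saved state
    run_macro('C:/data/init_macro.py')           # step 3: run the init macro
    Render()
    # Interact()  # uncomment when testing from pvpython to keep the window responsive

You may need to adjust how the state file finds the DICOM data, but the structure above maps one button per function.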
For follow-up questions, I would suggest asking on the ParaView Discourse : https://discourse.paraview.org/
I am trying to build a video player using Flutter for desktop. There is a video_player plugin available for iOS and Android, but not for desktop. So, for the time being, I thought of trying to use GStreamer for decoding and hardware rendering in C++ code as the back-end to Flutter. The idea is to pass the window ID of the Flutter window to GStreamer's glimagesink plugin for rendering the video.
I am using the latest code from https://github.com/google/flutter-desktop-embedding as the base for my experiments. The points below are with reference to this repo.
In file flutter-desktop-embedding/example/linux/main.cc, FlutterWindowController object is created as shown below.
flutter::FlutterWindowController flutter_controller(icu_data_path);
This internally calls
FlutterDesktopInit();
When hovering the mouse pointer over the above method, VS Code shows the following:
bool FlutterDesktopInit()
Sets up the library's graphic context. Must be called before any other
methods.
Note: Internally, this library uses GLFW, which does not support multiple
copies within the same process. Internally this calls glfwInit, which will
fail if you have called glfwInit elsewhere in the process.
It is clear that FlutterDesktopInit() uses GLFW to create the window. I checked whether I could get the window handle, but had no luck. I could only get the FlutterWindow object, as shown below.
flutter::FlutterWindow *win = flutter_controller.window();
I would appreciate it if somebody could give a hint on how to get the GLFW window handle, which could then be used with glimagesink.
You can't get references to any GLFW objects through that API. This is by design because, as the comment you quoted says, you can't have multiple copies of GLFW within the same process. GLFW is statically linked into the Linux Flutter embedding, so you can't use GLFW in the runner or a plugin.
Implementing a video player should be done using the texture APIs, which will be added for GLFW in this PR.
I would like to know if it is possible to make a simple API call (e.g. GitHub API v3) within the context of a DocFx custom template preprocessor. I have been trying all sorts of different approaches, but nothing has fully worked so far.
My goal is to make a call to an API to retrieve some data, and then update the model accordingly to be used in the *.liquid or *.tmpl renderers.
I have tried using the http/https Node modules. I have also tried using node-fetch. Both result in a docfx build error something like:
Error:Error transforming model ".../index.raw.json" generated from
".../index.md" using "conceptual.html.primary.js". Error running
Transform function inside template preprocessor
According to the DocFx documentation, preprocessors follow the ES 5.1 standard, and my code conforms to this.
Does anyone know if this is possible?
By the way, I am able to do simple model manipulation just fine, so I understand the basic concepts here with the DocFx preprocessors.
Thanks!
For the benefit of others: I discovered that DocFX uses Jint, which cannot require a Node library directly. Therefore, it appears the plugin route is the better way to go for this use case.
I'm using VS 2013 with Coded UI to automate UI tests on an application that is not built by my client (it's an implementation project). When inspecting the UI controls using Inspect or Coded UI, I see that the Automation ID keeps changing, and I have no real way (besides position-based) to capture my controls (the application is developed in Delphi).
So I'm wondering if there exists some library or add-on (or something not even related to Coded UI and VS) that can help with this? For example, a tool that can capture a screenshot of the control and then map the screenshot to a control ID that I define and use to automate?
Wow... I was able to find a way to do what I need using Sikuli (http://www.sikuli.org/); check out the post linked below. I'll actually try it out tomorrow, but from what I found on the web it's possible.
From Coded UI we can call the Sikuli script like this:
// Launch the Sikuli batch script and wait for it to finish
// (requires using System.Diagnostics).
Process process = new Process();
ProcessStartInfo startInfo = new System.Diagnostics.ProcessStartInfo();
startInfo.FileName = @"D:\Sikuli\ds.bat";
process.StartInfo = startInfo;
process.Start();
process.WaitForExit();
(Code from https://answers.launchpad.net/sikuli/+question/232233 ; read this post, guys!)
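For reference, the Sikuli side is just a script of image-matching calls. A minimal sketch (the image names are placeholders for screenshots of the Delphi controls, and exactly how ds.bat launches the script depends on your Sikuli install) would be:

# Minimal Sikuli script sketch (Sikuli scripts are Jython, launched by ds.bat).
# The .png names are placeholders for screenshots of the Delphi controls.
wait("customer_field.png", 10)    # wait up to 10 s for the control to appear
click("customer_field.png")       # click it, matched purely by its screenshot
type("ACME Corp")                 # type into the now-focused control
if exists("error_dialog.png"):    # optional recovery branch
    click("ok_button.png")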
I have a single-page web app that presents a multi-step photo management "wizard", split up across several discrete steps (photo upload, styling, annotation, publishing) via a tab strip. On switching steps I set the URL hash to #publishing-step (or whichever step was activated).
How do I set up Optimizely tests to run on the various discrete steps of the wizard?
The browser never leaves the page, so it only gets a single window.load event. The DOM isn't getting scrapped or regenerated; the app just switches which page elements are visible at any one time via display: none or block. So the part I am trying to figure out is really mostly how to go about the Optimizely test setup itself - it's fine (and likely necessary) if all edits get applied at once.
This thing unfortunately has to work in IE9, so I can't use history.pushState to get pretty discrete urls for each step.
There are actually several ways you could go about doing this, and which option you choose will largely depend on what's easiest for you AND how you plan to analyze the data.
If you want to use Optimizely's analytics dashboard:
I would recommend creating one experiment which will activate a bunch of other experiments at different times. The activation experiment will be targeted to everyone and run immediately when they get to your wizard. The other experiments will be set up with manual activation and triggered by this experiment.
The activation experiment would have code like:
window.optimizely = window.optimizely || [];

function hashChanged() {
  // location.hash includes the leading '#'
  if (location.hash === '#publishing-step') {
    window.optimizely.push(['activate', 0000000000]); // ID of a manually-activated experiment
  }
  if (location.hash === '#checkout-step') {
    window.optimizely.push(['activate', 1111111111]);
  }
}

window.addEventListener('hashchange', hashChanged, false);
Or you could call window.optimizely.push(['activate', xxxxxxxxx]); directly from your site's code instead of creating an activation experiment and listening for hashchange.
If you want to use a 3rd party analytics tool like Google Analytics:
You could do this all in one experiment with code similar to the above, but in each "if" section, instead of activating an experiment, you could run your variation code that makes changes to the wizard and sends special tracking information to your analytics suite for later reporting. You'll have to do your own statistical-significance calculation for this method (as Optimizely's data won't be "clean"), but this method usually works out better if properly configured.
Alternatively you could use the method outlined above but still try to use the Optimizely analytics dashboard by creating custom events on your experiment and sending data to them using calls like window.optimizely.push(["trackEvent", "eventName"]);
This article may also be helpful to you.
You'll probably need to do this yourself, using Optimizely's JS API to trigger actions on their end and tell it what your users did: https://www.optimizely.com/docs/api
I was googling a lot in order to find a solution to my problems with UI Automation. I found a post that nicely summarizes the issues:
There's no way to run tests from the command line.(...)
There's no way to set up or reset state. (...)
Part of the previous problem is that UI Automation has no concept of discrete tests. (...)
There's no way to programmatically retrieve the results of the test run. (...)
source: https://content.pivotal.io/blog/iphone-ui-automation-tests-a-decent-start
Problem no. 3 can be solved with jasmine (https://github.com/pivotal/jasmine-iphone)
How about other problems? Have there been any improvements introduced since that post (July 20, 2010)?
And one more problem: is it true that the only existing method for selecting a particular UI element is adding an accessibility label in the application source code?
While UI Automation has improved since that post was made, the improvements that I've seen have all been related to reliability rather than new functionality.
He brings up good points about some of the issues with using UI Automation for more serious testing. If you read the comments later on, there's a significant amount of discussion about ways to address these issues.
The topic of running tests from the command line is discussed in this question, where a potential solution is hinted at in the Apple Developer Forums. I've not tried this myself.
You can export the results of a test after it is run, which you could parse offline.
Finally, in regard to your last question: you can address UI elements without assigning them an accessibility label. Many common UIKit controls are accessible by default, so you can already target them by name. Otherwise, you can pick out views by their location in the display hierarchy, as in the following example:
var tableView = mainWindow.tableViews()[0];
As always, if there's something missing from the UI Automation tool that is important to you, file an enhancement request so that it might find its way into the next version of the SDK.
Have you tried IMAT? https://code.intuit.com/sf/sfmain/do/viewProject/projects.ginsu . It uses the native JavaScript SDK that Apple provides and can be triggered via the command line or Instruments.
In response to each of your questions:
There's no way to run tests from the command line.(...)
Apple now provides this. With IMAT, you can kick off tests via command line or via Instruments. Before Apple provided the command line interface, we were using AppleScript to bring up Instruments and then kick off the tests - nasty.
There's no way to set up or reset state. (...)
Check out this state diagram: https://code.intuit.com/sf/wiki/do/viewPage/projects.ginsu/wiki/RecoveringFromTestFailures
Part of the previous problem is that UI Automation has no concept of discrete tests. (...)
Agreed. Both IMAT and tuneup.js (https://github.com/alexvollmer/tuneup_js#readme) allow for this.
There's no way to programmatically retrieve the results of the test run. (...)
Reading the trailing plist file is not trivial. IMAT provides a jUnit-like report after a test run by reading the plist file, and this is picked up by my CI tool (TeamCity, Jenkins, CruiseControl).
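If you would rather roll your own report instead of using IMAT, a hedged starting point is simply to load the exported results plist and dump its structure before deciding how to map it onto jUnit XML. The key names and layout differ between Instruments versions, so treat this purely as an exploratory sketch:

# Exploratory sketch: print the structure of an exported UI Automation
# results plist so you can see what a report generator would have to walk.
import plistlib
import sys

def dump(node, indent=0):
    # Recursively print keys and values so the file's layout becomes visible.
    pad = ' ' * indent
    if isinstance(node, dict):
        for key, value in node.items():
            print(pad + str(key) + ':')
            dump(value, indent + 2)
    elif isinstance(node, list):
        for item in node:
            dump(item, indent + 2)
    else:
        print(pad + repr(node))

with open(sys.argv[1], 'rb') as f:   # e.g. "Automation Results.plist"
    dump(plistlib.load(f))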
Check out http://lemonjar.com/blog/?p=69. It talks about how to run UIA from the command line.
Try checking the element hierarchy; the table may be placed inside a UIScrollView.
var tableV = mainWindowTarget.scrollViews()[0].tableViews()[0].scrollToElementWithName("Name of element inside the cell");
The above script will work even if the element is in the 12th cell (but the name should be exactly the same as the one inside the cell).