We would like to make sure that the MLflow experiment management platform fits our needs and workflow.
We work with image-processing CNNs such as YOLO, U-Net, and RetinaNet, based on the NVIDIA TAO framework.
What we actually need is a tool that gathers in one place (presented in a way that makes comparison comfortable) at least the following three things for each experiment:
a- the user-chosen hyperparameters that were used to train the network (such as batches, subdivisions, max batches, etc.)
b- a link to the dataset the network was trained on, located on our cloud storage (OneDrive, Google Drive, or Google Cloud), or a list of filenames, or a link to a file-storage service suggested by MLflow itself, if such a thing exists
c- the result of running the trained network - the number of detected objects
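For context, here is roughly how I imagine those three points would map onto MLflow's tracking API, if it does fit. This is just a sketch on my side; the run name, parameter values, dataset URI, and metric name are ours, not anything MLflow prescribes:

```python
import mlflow

# Hypothetical values from one of our TAO training runs
with mlflow.start_run(run_name="yolo_v4_run_42"):
    # (a) user-chosen training hyperparameters
    mlflow.log_params({
        "batch_size": 64,
        "subdivisions": 16,
        "max_batches": 12000,
    })

    # (b) pointer to the dataset on our own cloud storage
    mlflow.set_tag("dataset_uri", "gs://our-bucket/datasets/vehicles_v3")

    # (c) result of running the trained network on the validation set
    mlflow.log_metric("detected_objects", 137)
```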
Thus the question is:
Does MLflow fit our needs?
If not, I'll be glad if anyone can suggest a relevant alternative.
Thank you
I use Comet.ml and it addresses all 3 of your points.
With Comet, parameter tracking is as easy as calling the experiment.log_parameter function. You can also use the diff tool to compare two experiments by their hyperparameters, and even group experiments by hyperparameters!
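A minimal sketch of parameter logging (the API key, project name, and parameter values are placeholders):

```python
from comet_ml import Experiment

# Placeholder credentials and project name
experiment = Experiment(api_key="YOUR_API_KEY", project_name="tao-detection")

# Log individual hyperparameters...
experiment.log_parameter("batch_size", 64)

# ...or a whole dict at once
experiment.log_parameters({
    "subdivisions": 16,
    "max_batches": 12000,
    "learning_rate": 1e-4,
})
```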
Comet has the concept of artifacts. You can upload your dataset as an artifact and version it. You can also have remote artifacts!
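A rough sketch of what that looks like (the artifact name, file path, and remote URI are made up; add_remote lets you reference data that stays on your own cloud storage):

```python
from comet_ml import Artifact, Experiment

experiment = Experiment(api_key="YOUR_API_KEY", project_name="tao-detection")

# Create a versioned dataset artifact
artifact = Artifact(name="vehicles-dataset", artifact_type="dataset")

# Either upload local files...
artifact.add("data/train/annotations.json")

# ...or just register a remote URI so the data stays on your own storage
artifact.add_remote("gs://our-bucket/datasets/vehicles_v3")

experiment.log_artifact(artifact)
```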
Comet has a feature called the Image Panel. This allows users to visualize their model's performance on the data across different experiment runs. For the object detection use case, use experiment.log_image to log images on which you have drawn your model's predicted bounding boxes! You will then see, in the Image Panel, different experiments and how each of them draws its predictions, side by side.
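Roughly like this (a sketch: the image path, box coordinates, and experiment setup are illustrative; here the boxes are drawn with Pillow before logging):

```python
from comet_ml import Experiment
from PIL import Image, ImageDraw

experiment = Experiment(api_key="YOUR_API_KEY", project_name="tao-detection")

# Draw the model's predicted boxes onto the frame before logging it
image = Image.open("frames/frame_0001.jpg")
draw = ImageDraw.Draw(image)
for (x1, y1, x2, y2) in [(34, 50, 120, 180), (200, 40, 310, 160)]:  # predicted boxes
    draw.rectangle([x1, y1, x2, y2], outline="red", width=3)

experiment.log_image(image, name="frame_0001_predictions")
```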
I am looking for suggestions before I plan/design an open-source Flutter package that enables developers to introduce distributed consensus into their applications (basically a blockchain across mobile phones) alongside their existing backend stack.
This package would work alongside the existing backend to generate identities, participate in transaction validation, and store a copy of the chain on the device. Later I am planning to add pluggable encryption, chosen to match the developer's needs, for enhanced security.
Practical Byzantine Fault Tolerance (PBFT) seems like the best consensus option, but the resource-intensive work a full consensus protocol demands is something phones can't sustain, so I am planning to implement PBFT based on the technique explained in the "Elastico" paper (a review/summary is linked here [1]).
This is just an initial idea that has been rolling around in my mind for a few hours now. I need to assess the feasibility of this technique running on mobile phones that form a network.
[1] https://muratbuffalo.blogspot.com/2018/03/paper-review-secure-sharding-protocol.html
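To make the consensus part concrete, here is a tiny illustration (in Python only for readability; the actual package would be Dart/Flutter) of the quorum rule PBFT-style protocols rely on: with n = 3f + 1 replicas the protocol tolerates f faulty ones, and a value is accepted once 2f + 1 matching votes arrive.

```python
def max_faulty(n: int) -> int:
    # PBFT assumes n >= 3f + 1, so the tolerated number of faulty nodes is f = (n - 1) // 3
    return (n - 1) // 3

def has_quorum(votes_for_value: int, n: int) -> bool:
    # A value is accepted once 2f + 1 matching votes are collected
    return votes_for_value >= 2 * max_faulty(n) + 1

# Example: 7 phones tolerate 2 faulty ones and need 5 matching votes
assert max_faulty(7) == 2
assert has_quorum(5, 7) and not has_quorum(4, 7)
```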
Why do you want to build this?
This package would help devs integrate blockchain features into their centralised app stack without rebuilding their backend. As for the application side, consider situations that involve decision making and opinion forming, supply chain management, and data provenance (integrating with IPFS).
Development of the plugin will take some real effort and time.
I have not done any serious feasibility study on the project, but I believe this plugin will be useful in the long run.
If you have any suggestions based on your experience in plugin development, please share them here.
I'm working on an application that is very interactive and is now at the point of requiring a real analytics solution. We generate roughly 2.5-3 million events per month (and growing), and would like to build reports to analyze user cohorts, funnels, etc. The reports are standard enough that it would seem feasible to use an existing service.
However, given the volume of data, I am worried that the cost of a hosted analytics solution like Mixpanel will become very expensive very quickly. I've also looked into building a traditional star-schema data warehouse with offline background processes (I know very little about data warehousing).
This is a Ruby application with a PostgreSQL backend.
What are my options, both build and buy, to answer such questions?
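To make the "build" option more concrete for myself: the core of a home-grown version on the existing PostgreSQL backend would presumably be just an events table plus aggregate queries over it. A rough sketch of what I have in mind (table and column names are made up; shown from Python for brevity even though the app itself is Ruby):

```python
import psycopg2  # standard PostgreSQL driver

conn = psycopg2.connect("dbname=app_analytics")  # hypothetical connection string
cur = conn.cursor()

# A minimal event "fact" table: one row per user action
cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id          bigserial   PRIMARY KEY,
        user_id     bigint      NOT NULL,
        name        text        NOT NULL,   -- e.g. 'signed_up', 'created_report'
        occurred_at timestamptz NOT NULL DEFAULT now()
    )
""")

# A two-step funnel: of the users who signed up, how many later created a report?
cur.execute("""
    SELECT count(DISTINCT s.user_id) AS signed_up,
           count(DISTINCT c.user_id) AS converted
    FROM events s
    LEFT JOIN events c
           ON c.user_id = s.user_id
          AND c.name = 'created_report'
          AND c.occurred_at > s.occurred_at
    WHERE s.name = 'signed_up'
""")
print(cur.fetchone())
conn.commit()
```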
Why not build your own?
Check out this open-source project as an example:
http://www.warefeed.com
It is very basic, and you will have to build the data-mart features you need for your case.
Hi
We are expanding one of our projects at a major bank to include access via mobile devices. We are evaluating a few tools, including Perfecto Mobile, Experitest, and DeviceAnywhere.
From our initial evaluation, Perfecto and DeviceAnywhere cover a larger set of handsets, including feature phones. Experitest, on the other hand, is strong and simple to operate with smartphones (iPhone, Android, etc.).
Can anyone share experience using these tools on large-scale projects? We are mainly concerned about stability, the ability to work with QTP, and support considerations (support for new devices, etc.).
I have used DeviceAnywhere extensively. Perfecto, not that much, after a pretty disappointing trial period. DA has support/add-ins for QTP and QC; Perfecto does not cover QC. Perfecto is not faster than DA, since most of their devices are in Israel rather than the US. DA has a few data centers in the US and abroad, so you have a better chance of getting good performance. DA has a pretty long list of enterprise and carrier customers, while Perfecto seems like a very small company. Compare their website quality - it's pretty obvious which one looks more professional. You should try them both and make up your own mind.
I have used all 3 platforms many times
Only Perfecto Mobile and DA are robust enough for real testers (at least for enterprise level).
DA has more devices, but Perfecto is 100% web-based, faster, and MUCH cheaper. Both offer automation environments with pros and cons, but Perfecto offers QTP integration and enhanced security solutions.
Conclusion - both systems are good, Perfecto is cheaper, and Perfecto is much better for enterprises engaging in mobile testing.
Guido
Think of coupling a standard software remote control product with a standard software test robot (like QTP).
As an alternative, and being a mostly device-independent but bitmap-dependent solution, you could use one of the many remote controls to bring the mobile's display contents to the desktop. Then, you'd "click around" in that remote control window using your favourite test robot.
Sounds stupid? Well, it has its strong and its weak points:
If QTP is set for you, you'd be stuck with bitmap synchronization; no other useful GUI properties would be visible. However, if you have some QTP know-how on board, you could reuse all of it for test management integration via QC, test data addressing, and so on, plus scripting "art" like wait-for-the-right-thing, converting bitmaps to text, and so on. You could even verify, "in real time", the results displayed on the mobile against data in the corporate backend, or look up expected results in some central database after doing a transaction on the mobile -- all of that would be easy, since your test robot runs as part of the IT infrastructure all the time and so has easy access to those resources. And those accesses could be done with all the comfort we are used to on PC-based test robots, like, for example, QTP's database checkpoint.
The positive aspect would be: Using such a scenario, you are largely independent of the mobile's technical details, and could support a lot of different devices by just using different sets of expected bitmaps. (Provided the workflows are exactly the same, which of course is not always the case.)
If you don't have to buy an extra test robot, this solution might be unbeatably cheap. Most Windows Mobile devices, for example, can be used with Microsoft's free remote control, and there are lots of commercial vendors offering remote control functionality for a variety of devices in one package.
Also, you could develop test scripts using emulators emulating the mobile device, because the test robot would not know the difference between a display being fetched from the real thing, or being shown by the emulator.
I've done all that with various remote controls and PDA/smartphone devices, using CitraTest or QTP as the test robot. I was very happy not having to mess around with yet-another-specialized tool, or even more than one of them, each with their own language, or methodology.
Biggest hurdles besides the ones already mentioned were:
find a remote control that is versatile, fast and reliable
find a way to let the mobile use its "normal" communication path (for example, a cellular connection) for all applications while, for performance reasons (and to minimize side effects), the remote control is connected through a direct connection (USB, proprietary sync cable, network... whatever the mobile supports).
create a scripting "standard" which is sufficiently exact to keep test robot and mobile app execution synchronized while avoiding re-capturing expected bitmap for all supported devices too often (this can be partly automated)
timing problems -- when you are on the bitmap level, it is hard to tell if you waited "long enough" for some message to appear, disappear, or whatever (see the sketch after this list).
cover exotics like "app continues only after you took a photo with the mobile camera". Generally speaking: Control the built-in periphery (what a contradiction...) of the mobile (in my case, I had to make the barcode scanner "see" specific images -- quite difficult and usually very device-dependent to automate)
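To illustrate the timing point above: on the bitmap level, synchronization usually boils down to a polling loop with a timeout. A minimal sketch (Python with Pillow here, purely for illustration; any test robot or remote control could sit behind it, and the expected bitmap is assumed to have been captured from the same screen region):

```python
import time
from PIL import Image, ImageChops, ImageGrab  # Pillow; ImageGrab works on Windows/macOS

def wait_for_bitmap(expected_path, region, timeout=30.0, interval=0.5, tolerance=8):
    """Poll a region of the remote-control window until it matches the expected
    bitmap (within a per-channel tolerance) or the timeout expires."""
    expected = Image.open(expected_path).convert("RGB")  # captured from the same region size
    deadline = time.time() + timeout
    while time.time() < deadline:
        actual = ImageGrab.grab(bbox=region).convert("RGB")
        diff = ImageChops.difference(actual, expected)
        if max(hi for _, hi in diff.getextrema()) <= tolerance:
            return True          # the screen shows what we expected
        time.sleep(interval)     # not yet; poll again
    return False                 # "long enough" has passed; treat as a synchronization failure

# Example (hypothetical coordinates): wait for the app's login screen in the mirrored window
# if not wait_for_bitmap("expected/login_screen.png", (100, 100, 420, 580)):
#     raise TimeoutError("Login screen never appeared on the mirrored display")
```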
It's feasible, though, and such a solution can be very stable and reliable, with a sufficient grade of cost-efficiency in terms of test maintenance effort (depending on what changes how frequently in the app-to-test, of course).
jQuery runs a lot of tests automatically on both feature phones and smartphones; maybe you can use their test system. As a side note, check whether jQuery Mobile is for you, it seems very cool.
To the best of my knowledge, Perfecto Mobile has made some major improvements to its offering and currently offers some major benefits over the others, including price. In the last few months they've added popular devices like the Lenovo LePhone. You can see the full list on their website: www.perfectomobile.com. Since they use different control technology than DeviceAnywhere, they are capable of supporting new devices really quickly. Regarding stability and QTP, they also have many advantages over the others. For instance, tools to record your own specific user scenarios and test them repeatedly across devices - this is a great automation tool for large-scale projects.
If you are testing a banking application, you should consider security.
How do you protect your application and application data? Once you release a phone, someone else can take control of it.
My recommendation is to use the on-site capabilities I believe all the above solutions have.
I do a sort of integration/stress test on a very large product (think operating-system size), and recently my team and I have been discussing ways to better organize our test workloads. Up until now, we've been content to have all of our (custom) workload applications in a series of batch-type jobs, each of which represents a single stress-test run. Now that we're at a point where the average test run involves upwards of 100 workloads running across 13 systems, we think it's time to build something a little more advanced.
I've seen a lot out there about unit testing frameworks, but very little for higher-level stress type tests. Does anyone know of a common (or uncommon) way that the problem of managing large numbers of workloads is solved?
Right now we would like to keep a database of each individual workload and provide a front-end to mix and match them into test packages depending on what kind of stress we need on a given day, but we don't have any examples of the best way to do more advanced things like ranking the stress that each individual workload places on a system.
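To make that idea a bit more concrete: the catalog itself can be very small. Here is a sketch of the kind of record and "mix and match" step we have in mind (the field names, the stress-ranking attribute, and the selection rule are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    target_system: str   # which of the systems it runs against
    stress_score: int    # hypothetical 1-10 ranking of how hard it hits the system
    command: str         # how the batch job is launched today

# A tiny catalog; in practice this would live in the database behind the front-end
CATALOG = [
    Workload("fs_metadata_churn", "sysA", 7, "run_fs_churn.sh"),
    Workload("net_flood_small_pkts", "sysB", 9, "run_net_flood.sh"),
    Workload("db_checkpoint_storm", "sysA", 5, "run_db_storm.sh"),
]

def build_package(min_total_stress: int, system: str) -> list[Workload]:
    """Pick workloads for one system until the requested stress level is reached."""
    picked, total = [], 0
    for wl in sorted(CATALOG, key=lambda w: w.stress_score, reverse=True):
        if wl.target_system == system and total < min_total_stress:
            picked.append(wl)
            total += wl.stress_score
    return picked

print([w.name for w in build_package(10, "sysA")])
```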
What are my fellow stress testers on large products doing? For us, a few handrolled scripts just won't cut it anymore.
My own experience was initially on the IIS platform with WCAT, and latterly with JMeter and Selenium.
WCAT and JMeter both allow you to walk through a website as a route, completing forms etc., and record the process as a script. The script can then be played back singly, or simulating multiple clients and multiple threads, with the playback randomised to simulate lumpy and unpredictable use.
The scripts can be edited, or written by hand once you know where you are going. WCAT will let you play back from log files as well, allowing you to simulate real-world usage.
Both of the above are installed on a PC or server.
Selenium is a Firefox add-in but works in a similar way, recording and playing back scripts and allowing for scaling up.
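A minimal illustration of the record-then-scale idea (a hand-written Selenium script in Python; the URL and form field names are made up, and a few threads stand in for concurrent clients):

```python
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver
from selenium.webdriver.common.by import By

def walk_through_site(client_id: int) -> None:
    """One simulated client walking a fixed route through the site."""
    driver = webdriver.Firefox()
    try:
        driver.get("https://example.test/signup")  # made-up URL
        driver.find_element(By.NAME, "email").send_keys(f"user{client_id}@example.test")
        driver.find_element(By.NAME, "submit").click()
    finally:
        driver.quit()

# Scale up: a handful of concurrent simulated clients
with ThreadPoolExecutor(max_workers=5) as pool:
    pool.map(walk_through_site, range(5))
```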
Somewhat harder is working out the scenarios you are testing for and then designing the tests to fit them. Also, interaction with the database and other external resources needs to be factored in. Expect to spend a lot of time looking at log files, so good graphical output is essential.
The most difficult thing for me was collecting and organizing performance metrics from servers under load. Performance Monitor was my main tool. When the Visual Studio Tester edition came out, I was astonished by how easy it was to work with performance counters. They pre-package a list of counters for a web server, SQL Server, an ASP.NET application, etc. I learned about a bunch of performance counters I did not even know existed. In addition, you can collect your own counters too. You can store metrics after each run. You can also connect to production servers and see how they are feeling today. When I see all those graphs in real time, I feel empowered! :) If you need more load you can get VS Load Agents and create a load-generating rig (or should I call it a botnet). Compared to other products on the market it is relatively inexpensive. The MS license goes per processor, not per concurrent request. That means you can produce as much load as your hardware can handle. On average, I was able to get about 3,000 concurrent web requests on a dual-core, 2 GB memory computer. In addition, you can incorporate performance tests into your builds.
Of course, it only works on Windows. Besides, the price tag of about $6K for the tool could be a bit high, plus the same amount again per additional load agent.