Does MLflow support real-time metrics viewing?

People use TensorBoard to view real-time updates to model metrics during training. Is this feature also available with MLflow?

I'm not very familiar with MLflow, but with the comet_ml library you can watch real-time updates during training.
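For MLflow specifically: metrics logged during a run do show up in the MLflow tracking UI while training is still going, although the charts update on page refresh rather than streaming live the way TensorBoard does. A minimal sketch (the run name and metric values here are made up):

```python
import random
import mlflow

# Log one metric value per epoch; the MLflow tracking UI picks these up
# from the local ./mlruns store as the run progresses.
with mlflow.start_run(run_name="demo-run"):
    for epoch in range(10):
        loss = 1.0 / (epoch + 1) + random.random() * 0.01  # stand-in for a real training loss
        mlflow.log_metric("loss", loss, step=epoch)
```

Run mlflow ui in the same directory and open http://localhost:5000 to watch the loss chart fill in; refresh the page to see new points appear.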

Related

On Db2 v11.1, how do we get or set up notifications for the DBA team if there is a hang or slowness situation during off-shift working hours?

The answer depends on the external monitoring and alerting solution you have deployed and how you configure that tooling in your environment.
This application-layer tooling is not built into Db2-LUW, although APIs exist in Db2-LUW for such tooling to get the data it needs in order to operate.
IBM and several third parties offer real-time monitoring and alerting solutions in this space. Many cover app servers, web servers, database layers, networks, and operating-system layers, with varying alerting configurability. Many have a plugin-style architecture with plugins for Db2-LUW monitoring. Note, however, that Stack Overflow is not the place for product recommendations.
"Slowness" is usually only meaningful to measure at the application layer, in terms of response times and similar metrics.
For database hangs, IBM offers a db2_hang_detect script that tooling can orchestrate; it requires careful interpretation and even more careful testing.
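To illustrate the kind of API-level data such tooling pulls from Db2-LUW, the sketch below polls the SYSIBMADM.MON_CURRENT_SQL administrative view for long-running statements using the ibm_db Python driver. The connection string and threshold are hypothetical, and the view and column names should be verified against your Db2 11.1 documentation:

```python
import ibm_db

DSN = "DATABASE=SAMPLE;HOSTNAME=dbhost;PORT=50000;UID=dbmon;PWD=secret;"  # hypothetical
THRESHOLD_SEC = 300  # arbitrary "slowness" threshold for illustration

conn = ibm_db.connect(DSN, "", "")
stmt = ibm_db.prepare(conn,
    "SELECT APPLICATION_HANDLE, ELAPSED_TIME_SEC, STMT_TEXT "
    "FROM SYSIBMADM.MON_CURRENT_SQL "
    "WHERE ELAPSED_TIME_SEC > ? "
    "ORDER BY ELAPSED_TIME_SEC DESC")
ibm_db.bind_param(stmt, 1, THRESHOLD_SEC)
ibm_db.execute(stmt)

row = ibm_db.fetch_assoc(stmt)
while row:
    # Real tooling would page/notify the DBA team here instead of printing.
    print(row["APPLICATION_HANDLE"], row["ELAPSED_TIME_SEC"], row["STMT_TEXT"][:80])
    row = ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)
```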

Is there an ETW provider for Windows Mixed Reality events?

My goal is to include VR-specific events in an ETL capture file to be able to analyze some performance issues.
There are custom providers for the Oculus and SteamVR runtimes, but I could not find any documentation about ETW events produced by the WMR runtime.
I could not identify any obvious candidate in the output of logman query providers.
There is a provider that might be what you're after:
ProviderGUID: 60d6e217-d25b-504f-83d5-c2deb6a854e5
ProviderName: Microsoft.Windows.Holographic.MixedRealityMode
ProviderGroupGUID: 4f50731a-89cf-4782-b3e0-dce8c90476ba
AssociatedFilenames: ["spectrum.exe"]
I don't know where this provider list originally came from, though.
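If you want to experiment with that GUID, one option (untested against the WMR runtime; this is just standard ETW session handling, and the session name and output path are arbitrary) is to record it with logman and open the resulting .etl in Windows Performance Analyzer:

```python
import subprocess

# Provider GUID from the listing above; run from an elevated prompt.
GUID = "{60d6e217-d25b-504f-83d5-c2deb6a854e5}"

# Start an ETW session attached to the provider.
subprocess.run(["logman", "start", "wmr_trace", "-p", GUID,
                "-o", "wmr_trace.etl", "-ets"], check=True)

input("Reproduce the scenario in the headset, then press Enter to stop tracing...")

# Stop the session; wmr_trace.etl can then be opened in WPA.
subprocess.run(["logman", "stop", "wmr_trace", "-ets"], check=True)
```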
For performance debugging of WMR headset apps, it is recommended to use the new tools in the Windows Device Portal, and to refer to this broader discussion of performance help: Understanding performance for mixed reality.
Currently, the ETW provider GUID of the WMR runtime is not yet public in our official documentation. If you have a specific performance issue with WMR, we recommend opening a support ticket through this link: http://aka.ms/mrsupport for one-on-one support. We will also forward this question about the ETW provider to the product team to see whether it's feasible to clarify it in the official docs.

Best approach to construct a real-time rule engine for our streaming events

We are at the beginning of an IoT cloud platform project. There are certain well-known components needed for a complete IoT platform solution. One of them is a real-time rule processing engine, which must determine whether streaming events match rules defined dynamically by end users in a readable format (SQL, Drools if/when/then, etc.).
I am quite confused because there are lots of products and projects (Storm, Spark, Flink, Drools, EsperTech, etc.) out there. Considering we have a 3-person development team (a junior, a mid-senior, and a senior), what would be the best choice?
Choosing one of the streaming projects such as Apache Flink and learning it well?
Choosing one of the complete solutions (AWS, Azure, etc.)?
A BRMS (Business Rule Management System) like Drools is mainly built for quickly adapting to changes in business logic, and such systems are more mature and stable than stream processing engines like Apache Storm, Spark Streaming, and Flink. Stream processing engines are built for high throughput and low latency. A BRMS may not be suitable for serving hundreds of millions of events in IoT scenarios and may struggle with event-time-based window calculations.
All these solutions can run on IaaS providers. On AWS you may also want to take a look at AWS EMR and Kinesis/Kinesis Analytics.
Some use cases I've seen:
Stream data directly to FlinkCEP.
Use a rule engine for fast, low-latency responses while streaming the same data to Spark for analysis and machine learning.
You can also run Drools inside Spark and Flink to hot-deploy user-defined rules (a minimal sketch of the matching step follows below).
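Whichever engine you choose, the core job is the same: compile each user-defined rule into a condition and evaluate incoming events against it. A framework-agnostic Python sketch of that matching step (all names and the mini rule format are hypothetical; Drools and FlinkCEP do this far more efficiently):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Event = Dict[str, float]

@dataclass
class Rule:
    name: str
    condition: Callable[[Event], bool]  # compiled from the user's readable rule text
    action: Callable[[Event], None]

def compile_threshold_rule(name: str, field: str, op: str, value: float) -> Rule:
    """Turn a user-defined 'if <field> <op> <value>' rule into a Rule.
    A real engine would parse SQL or Drools-style text instead."""
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b, "==": lambda a, b: a == b}
    fn = ops[op]
    cond = lambda e: field in e and fn(e[field], value)
    return Rule(name, cond, lambda e: print(f"[{name}] fired on {e}"))

def process(event: Event, rules: List[Rule]) -> None:
    # Linear scan for clarity; Drools' Rete network avoids re-evaluating
    # every rule per event once rule counts grow large.
    for rule in rules:
        if rule.condition(event):
            rule.action(event)

rules = [compile_threshold_rule("overheat", "temperature", ">", 80.0)]
for event in [{"temperature": 72.0}, {"temperature": 95.5}]:
    process(event, rules)
```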
Disclaimer: I work for Losant. You should check it out: it's developer-friendly and very easy to get started with. We also have a workflow engine where you can build custom logic/rules for your application.
Check out the Waylay rules engine, built specifically for real-time IoT data streams.
In the beginning phase, go for a cloud-based IoT platform like Predix, AWS, SAP, or Watson for rapid product development and initial learning.

Visual analytics on Bluemix

How can I run visual analytics on historical IoT data on Bluemix?
There are services like Real-Time Insights and Streaming Analytics for real-time data analytics, but is there a service for historical data analytics and visualization?
There are a few different options depending on your use case, experience and data source. For example:
You can use Jupyter notebooks or RStudio on Data Science Experience. You can use R, Python, or Scala to analyse data and create re-runnable reports. You can also use Spark, which is great if your data volumes are large or if you want to use Spark's vast number of connectors to different data sources. This approach is ideal for data scientists with coding experience (see the sketch after this list).
You can use Watson Analytics if you want to do analysis without data science or coding skills. This environment is more for ad-hoc analysis than for reporting.
If you are looking to do reporting, Cognos has excellent visualization capabilities, and reports can be created by users who don't have coding skills.
There are a number of other options, but in my experience the above three tend to be the most common.
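To make the first option concrete, here is what a notebook cell along those lines might look like with PySpark. The file path, schema, and column names are hypothetical; adapt them to wherever your historical IoT data lands:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("iot-history").getOrCreate()

# Hypothetical export of historical readings: one JSON record per reading
# with deviceId, timestamp, and temperature fields.
readings = spark.read.json("historical_iot_readings.json")

daily = (readings
         .withColumn("day", F.to_date(F.col("timestamp")))
         .groupBy("deviceId", "day")
         .agg(F.avg("temperature").alias("avg_temp"),
              F.max("temperature").alias("max_temp")))

daily.orderBy("deviceId", "day").show()
# daily.toPandas() would hand the (small) aggregate to matplotlib/seaborn
# for the actual visualization inside the notebook.
```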

Event-based analytics package that won't break the bank with high volume

I'm working on an application that is very interactive and is now at the point of requiring a real analytics solution. We generate roughly 2.5-3 million events per month (and growing) and would like to build reports to analyze user cohorts, funnels, etc. The reports are standard enough that using an existing service seems feasible.
However, given the volume of data, I am worried that the cost of using a hosted analytics solution like Mixpanel will become very expensive very quickly. I've also looked into building a traditional star-schema data warehouse with offline background processes (I know very little about data warehousing).
This is a Ruby application with a PostgreSQL backend.
What are my options, both build and buy, to answer such questions?
Why not build your own?
Check out this open source project as an example:
http://www.warefeed.com
It is very basic, and you will have to build the datamart features you need for your case (a minimal starting sketch follows below).
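Since the application already runs on PostgreSQL, a build-your-own starting point can be as small as one append-only events table plus aggregate queries. A sketch with psycopg2 (table layout and event names are hypothetical), including a simple two-step funnel:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")  # adjust the DSN for your environment
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id          bigserial   PRIMARY KEY,
        user_id     bigint      NOT NULL,
        name        text        NOT NULL,
        occurred_at timestamptz NOT NULL DEFAULT now()
    )
""")
conn.commit()

# Two-step funnel: of the users who signed up, how many later checked out?
cur.execute("""
    SELECT count(DISTINCT s.user_id) AS signed_up,
           count(DISTINCT c.user_id) AS checked_out
    FROM events s
    LEFT JOIN events c
           ON c.user_id = s.user_id
          AND c.name = 'checkout'
          AND c.occurred_at > s.occurred_at
    WHERE s.name = 'signup'
""")
print(cur.fetchone())
```

At roughly 3 million events per month, a single indexed PostgreSQL table can hold up for quite a while; the star-schema warehouse only becomes necessary once ad-hoc queries over raw events get too slow.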