Energy consumption reporting in on-premises data centers

I have a question regarding “traditional” on-premises data center hardware (e.g., servers manufactured by Dell, HP, IBM, Lenovo, etc.). Do these servers ship with built-in software that automatically reports and stores data on the energy consumed by the servers? And if such software exists, how is the reporting done (e.g., via an API, or as CSV or JSON files)?
In short, I would like to learn about methods of reporting energy consumption in on-premises data centers.
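For what it's worth, on servers from these vendors the usual route is the baseboard management controller (iDRAC on Dell, iLO on HPE, XClarity on Lenovo), which typically exposes power telemetry over IPMI (e.g., ipmitool dcmi power reading on the CLI) or over the DMTF Redfish REST API, which returns JSON. Below is a minimal sketch against the standard Redfish power resource; the BMC address, credentials, and chassis ID are placeholders you would need to adapt.

```python
# Hypothetical sketch: reading current server power draw via the DMTF Redfish
# REST API exposed by most modern BMCs (Dell iDRAC, HPE iLO, Lenovo XClarity).
import requests

BMC = "https://10.0.0.42"      # management controller address (placeholder)
AUTH = ("admin", "password")   # BMC credentials (placeholder)

# The Redfish standard models power telemetry under /Chassis/<id>/Power;
# the chassis id ("1" here) varies by vendor.
resp = requests.get(f"{BMC}/redfish/v1/Chassis/1/Power", auth=AUTH, verify=False)
resp.raise_for_status()

# PowerControl[0].PowerConsumedWatts is the standard field for current draw.
watts = resp.json()["PowerControl"][0]["PowerConsumedWatts"]
print(f"Current consumption: {watts} W")
```

Polling such an endpoint on a schedule and writing the samples to a time-series store is one way to get the automatic reporting you describe.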

How to sum hourly metrics into daily and monthly aggregates in Graphite & Grafana?

Suppose a system that collects hourly energy consumption readings (Wh) from clients equipped with power analyzers (i.e., sensors that measure how much energy an electrical appliance consumes).
Each client periodically publishes the energy consumed by a device over the last hour. On the server side, this data is stored in Graphite (v1.2.0) and visualized in a Grafana (v6.5.2) dashboard.
In the dashboard, I can easily show the hourly consumption of a device as a line/bar graph. However, I also need graphs showing total daily and monthly consumption, aggregated from the hourly values.
How can I do that using Graphite and/or Grafana without collecting extra metrics? Is it possible at all?
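One way, as a minimal sketch: Graphite's summarize() function can roll the stored hourly series up into daily or monthly buckets at query time, so no extra metrics are needed. The metric path energy.device1.wh below is hypothetical; the same summarize(...) expression can be typed directly into Grafana's Graphite query editor, but here it is shown against Graphite's render API.

```python
# Minimal sketch: daily and monthly totals from hourly Wh samples using
# Graphite's summarize() at query time. Metric path and URL are placeholders.
import requests

GRAPHITE_URL = "http://graphite.example.com/render"

# summarize(series, "1d", "sum", true) groups the hourly values into 24 h sums;
# alignToFrom=true starts the buckets at the "from" time of the request.
daily = requests.get(GRAPHITE_URL, params={
    "target": 'summarize(energy.device1.wh, "1d", "sum", true)',
    "from": "-30d",
    "format": "json",
}).json()

# Graphite also accepts month units ("mon") for coarser buckets.
monthly = requests.get(GRAPHITE_URL, params={
    "target": 'summarize(energy.device1.wh, "1mon", "sum", true)',
    "from": "-1y",
    "format": "json",
}).json()

print(daily, monthly)
```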

Simulating the creation of a social network graph given a present snapshot

I am using the http://networkrepository.com/socfb-B-anon.php dataset for my analysis. I would like to analyze how this present graph could have formed from scratch. Is there an existing social network simulation framework for this kind of problem?
I am also open to using any other dataset, if available. I would need a timestamp for every edge (i.e., when the two nodes were connected).
The Barabási–Albert (BA) model describes preferential attachment for generating networks, or graphs. It iteratively builds a graph by adding new nodes and connecting them to previously added nodes. Each new node is attached to existing nodes with a probability proportional to an old node's degree relative to the total number of edges in the graph.
This algorithm has been shown to produce scale-free graphs, meaning the degree distribution follows a power law, which is a typical property of social networks.
This can be seen as a 'simulation' of a growing social network, where users are more likely to 'befriend' or 'follow' popular existing users. Of course, it is not a complete simulation, because it assumes a new user is done befriending or following others right after creating an account, but it might be a good starting point for your exploration.
A timestamp for each edge or node creation can be generated by maintaining a counter during the creation process and incrementing it as you add more edges or nodes to the graph, as in the sketch below.
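As a concrete illustration (a minimal sketch, not tied to any particular dataset): networkx ships a barabasi_albert_graph(n, m) generator, but it does not record creation times, so the loop below reimplements preferential attachment with a simple logical clock that ticks once per edge.

```python
# Sketch of Barabási-Albert preferential attachment that timestamps every edge.
import random

def ba_with_timestamps(n, m, seed=None):
    """Grow an n-node graph, attaching each new node to m existing nodes."""
    rng = random.Random(seed)
    edges = []                 # (new_node, existing_node, timestamp) triples
    repeated = []              # each node appears once per incident edge, so
                               # uniform sampling here is degree-proportional
    targets = list(range(m))   # the first new node connects to the seed nodes
    t = 0
    for new in range(m, n):
        for old in targets:
            t += 1             # logical clock: one tick per created edge
            edges.append((new, old, t))
        repeated.extend(targets)
        repeated.extend([new] * m)
        # pick m distinct, degree-proportional targets for the next node
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(repeated))
        targets = list(targets)
    return edges

print(ba_with_timestamps(n=1000, m=3, seed=42)[:5])
```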
Hopefully this answer gives you enough terminology to further your research.

How to design a distributed system for "finding something within X miles"?

Question:
Design a distributed system to respond to clients' queries of the form "find something within X miles".
If X is infinite, return all the "something" in the world (assuming it is all stored in your database).
You can think about two approaches:
1. When the number of potential results is small and the number of queries is large, divide the coordinate space between the available machines and send each query only to the machines responsible for areas that intersect the X-mile circle (sketched below).
2. When the number of potential results is large, store the objects dispersed so that they are uniformly distributed across all machines (you can choose the machine by randomization or by the objects' origin, depending on the case), then post every query to all machines and merge the received results.
Further refinements depend on getting more information about the nature of the problem.
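As a rough illustration of the first approach, here is a sketch under simplifying assumptions: fixed-size grid cells in degrees, a flat-Earth miles-to-degrees conversion, and a toy hash for the cell-to-shard mapping (a production system would use a stable hash or an explicit routing table).

```python
# Sketch: grid-partitioned space; a radius query is routed only to the shards
# owning cells that its bounding box touches. Cell size and shard count are
# arbitrary choices for illustration.
import math

CELL_DEG = 0.5     # cell size in degrees (assumption)
NUM_SHARDS = 16

def cell_of(lat, lon):
    return (math.floor(lat / CELL_DEG), math.floor(lon / CELL_DEG))

def shard_of(cell):
    return hash(cell) % NUM_SHARDS   # toy mapping; use a stable hash in practice

def shards_for_query(lat, lon, radius_miles):
    r_deg = radius_miles / 69.0      # ~69 miles per degree of latitude
    lo = cell_of(lat - r_deg, lon - r_deg)
    hi = cell_of(lat + r_deg, lon + r_deg)
    cells = {(i, j)
             for i in range(lo[0], hi[0] + 1)
             for j in range(lo[1], hi[1] + 1)}
    return {shard_of(c) for c in cells}

# Fan the query out only to these machines and merge their results.
print(shards_for_query(40.7, -74.0, 25))
```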

data flow visualisation with real-time data

We are building a flow diagram for business alerting. The diagram emphasizes the data flow rather than the "source" or "end" systems.
The flow diagram is dynamic (the color and width of connectors change based on alerts), and each flow is driven by data unique to that flow.
We are currently using FusionCharts "node charts" to construct it; the diagram is data-driven, with the source/destination (from/to) of each flow fetched from the data.
BUT...
FusionCharts supports only a one-to-one relationship, i.e., a single connection between one node and another.
Our case requires multiple connections between the same pair of nodes, because each data flow is distinct.
As an alternative, I looked at various data visualization collections such as http://www.visualcomplexity.com and found that transportation network maps (tube maps) represent our data better.
Hence:
1. Can you suggest any good flow diagram charts with configurable objects?
2. Any tools to draw tube maps/transportation networks?
Update: I found that d3.js is a good fit for visualizing and automating this kind of data-driven diagram.

What is the bottleneck algorithm for medical imaging applications? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
What is the computational bottleneck algorithm for medical imaging applications? We are trying to figure out whether there is a benefit to running these algorithms on regular cloud server instances or on GPU-accelerated server instances.
Unless the software has been specifically designed with GPU processing in mind, GPU-accelerated instances will perform about the same as regular commodity server instances, only at a higher price.
I'm willing to gamble and say that the bottleneck of any algorithm, medical or not, imaging or not, is the rate at which you can feed data to the CPU, together with the number of cores and the clock rate.
Get some fast CPUs, insanely fast RAM, blindingly fast striped/mirrored storage, and do it that way.
I suspect you'll find that running in "the cloud" is actually counterproductive, as many cloud service providers don't tune their storage backends for high-performance computing, but rather for providing a little bit of I/O to the masses.
I think you'd be better off with your own dedicated hardware; that way, you can spend your time and money tuning the hardware stack to match your software stack. Any cloud service provider (including Amazon) will impose trade-offs and compromises.
Oh, and don't forget not to put all your eggs in one basket. What happens when Amazon goes offline and nobody can examine any X-rays? Pity the poor schmuck who puts a heart-monitoring application on Amazon cloud instances when Amazon suffers a massive outage.
Aside from the compromises of cloud hosting and the problems of staying redundant and resilient to provider outages (i.e., not putting critical infrastructure in the cloud), there are other questions surrounding the architecture of your application itself: will it scale linearly?
I bet it won't.
Benchmarking a GPU implementation against cloud server instances shows large FPS differences [1, 2] for operations on large images (e.g., CR). On the other hand, the GPU's memory can become saturated, introducing delays and continuous frame dropouts. A cloud server solution could therefore be more stable, with fewer dropouts and a smoother feel, but at a lower FPS.
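If you want numbers for your own workload rather than published ones, a rough sketch like the one below compares an imaging building block (a 2D convolution, standing in for whatever your actual algorithm is) on CPU versus GPU; SciPy, the optional CuPy dependency, and the image size are assumptions.

```python
# Rough benchmark sketch: one imaging kernel on CPU (SciPy) vs. GPU (CuPy).
import time
import numpy as np
from scipy import ndimage

img = np.random.rand(4096, 4096).astype(np.float32)   # large test image
kernel = np.ones((9, 9), dtype=np.float32) / 81.0     # simple smoothing filter

t0 = time.perf_counter()
ndimage.convolve(img, kernel)
print(f"CPU convolve: {time.perf_counter() - t0:.3f} s")

try:
    import cupy as cp
    from cupyx.scipy import ndimage as gpu_ndimage
    g_img, g_kernel = cp.asarray(img), cp.asarray(kernel)
    t0 = time.perf_counter()
    gpu_ndimage.convolve(g_img, g_kernel)
    cp.cuda.Stream.null.synchronize()   # wait for the asynchronous GPU kernel;
                                        # first call includes compilation cost
    print(f"GPU convolve: {time.perf_counter() - t0:.3f} s")
except ImportError:
    print("CuPy not installed; skipping the GPU run.")
```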
[1] Zhang, Lequan, et al. "A high-frequency, high frame rate duplex ultrasound linear array imaging system for small animal imaging." IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 57.7 (2010).
[2] Miguez, D., et al. "A technical note on variable inter-frame interval as a cause of non-physiological experimental artefacts in ultrasound." Royal Society Open Science 4.5 (2017): 170245.