How to give Perspective access without Cube access? - xmla

In a cube, I have a calculated measure [Nb>4] that depends on a measure [Nb] and keeps only the values above 4.
We don't want users to see the underlying measure [Nb], so I defined a perspective that hides it using -[Measures].[Nb]
I am looking for a way to give access to a perspective without giving access to the cube it depends on (via XMLA in Excel, users can currently see both the perspective and the cube).
I tried to do this in the roles definition module, but it does not seem to be possible.

You can define the 'default' perspective (see the first screenshot, not reproduced here). This special perspective applies its visibility settings to the whole cube.
It should then be possible to hide your measure with:
-[Measures].[Nb]


Best Practice to display local markers and a wider area of points of interest markers?

I've created a base layer and 6 different overlay (Points of Interest) layers for a Leaflet map.
The base layer of markers can appear on the map almost anywhere in the world, but I want the POI layers to appear only if they are in the same area (mapBounds) as the base layer, roughly the visible screen area.
All the data is pulled from a MySQL database, and using Ajax I create the various sets of markers from two different tables, base and poi. This much is all done and working; you can see it at https://net-control.us/map2.php. The green and blue markers are from the base table; the other markers are selected for view by clicking the appropriate icon in the lower right. The only one active at the moment is 'Fire Station', but if you zoom out far enough you will see additional fire stations in the Kansas City area and in Florida. Those sets are not needed.
After the query runs I create a fitBounds variable for the base layer and another, poiBounds, for the poi layer, though I'm not sure I need the poiBounds. The number of base markers is generally less than 50 for the base query, but if all the poi markers are pulled worldwide that number could be very large.
So I'm hoping someone can help me determine a best practice for this kind of scenario and maybe offer up an example of how it should be done. Should I...
1) Download all POIs and not worry about them appearing outside the base bounds layer? Should I inhibit them from showing in the JavaScript or in the SQL? How?
2) If I inhibit the unwanted points in SQL, do I test one POI at a time to see if it's included in the base bounds? How? Are there MySQL functions, perhaps, to work with this kind of data?
I'm fairly new at leaflet maps and would appreciate examples if appropriate.
You probably want a column of type POINT, a spatial index on that column (which internally is likely to be implemented as an R-tree), and spatial relation functions in your SQL query to make use of that index.
Start by reading https://dev.mysql.com/doc/refman/8.0/en/spatial-types.html. Take your time, as spatial databases, spatial data types, and spatial indices work a bit differently from their non-spatial equivalents.
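For illustration, here is a minimal sketch of option 2 in Python, assuming MySQL 8 and the mysql-connector-python package (the table and column names are hypothetical, and the bounds would come from map.getBounds() on the Leaflet side):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="me",
                               password="secret", database="mapdb")
cur = conn.cursor()

# One-time schema change (hypothetical names): store each POI as a spatial
# POINT and give it a spatial index, which MySQL implements as an R-tree.
# cur.execute("ALTER TABLE poi ADD COLUMN location POINT SRID 4326 NOT NULL")
# cur.execute("CREATE SPATIAL INDEX idx_poi_location ON poi (location)")

# Bounds of the base layer, e.g. from map.getBounds() on the client.
south, west, north, east = 38.8, -92.5, 39.2, -92.0
# Note: with SRID 4326, MySQL 8 expects latitude-longitude axis order in WKT.
bbox = (f"POLYGON(({south} {west}, {south} {east}, {north} {east}, "
        f"{north} {west}, {south} {west}))")

# Let the database keep only POIs inside the box.
cur.execute(
    "SELECT id, name FROM poi "
    "WHERE ST_Contains(ST_GeomFromText(%s, 4326), location)",
    (bbox,),
)
for poi_id, name in cur.fetchall():
    print(poi_id, name)

With the spatial index in place, MySQL can prune candidates by each point's minimum bounding rectangle instead of scanning the whole poi table, so pulling only in-bounds POIs stays cheap even if the table is large.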

How can I change hand-rays in my MRTK v2 project for HoloLens 2 to parabolic instead of linear?

My HoloLens 2 project has content arranged such that I cannot target colliders with the existing hand-rays. I used to target my content with the head-gaze, but with hand-rays originating lower on the body it is more difficult to reach the content that I want to select. I believe I would benefit from a parabolic selection ray, similar to those used when teleporting in Mixed Reality, to reach surfaces above the participant.
The primary method of interacting with my content would be via a parabolic ray. There are instances within my application where I might change modality to focus on a menu system from close or far, and when I am far I'd like to change to a linear ray. So having the ability to change the type of ray via code would be preferred.
My project is employing the MRTK v2, and the standard linear hand-rays are functioning.
I would like to be able to change the type of ray being used in the Unity Inspector, and to change the style via code at run-time. I'd also like control over the arc of the ray, as the scale of my content may call for a different arc and min/max distance.
You can modify the DefaultControllerPointer prefab to use a Physical Parabolic Line Data Provider instead of a Bezier Line Data Provider. This will distort the line used by the pointer to be more parabolic.
Before-and-after screenshots of the prefab (not reproduced here) show the change: the Bezier Line Data Provider components (highlighted in pink) were removed and the Physical Parabolic Line Data Provider components (highlighted in green) were added.
You will also want to increase the line cast resolution of the pointer from 2 to something larger; this means that the ray used to query what you have hit will have higher resolution.
And you may want to increase the resolution of the MR Line Renderer itself.
Demo of the parabolic hand pointer (animation not reproduced here).

How do I rearrange the execution order of solvers in Mixed Reality Toolkit?

MRTK's Solver documentation says that you can "stack" solvers on the same object and their effects become cumulative.
The Known Issues section implies that you can control which order the solvers are evaluated in, and that this can cause differences in behavior.
How do I change the evaluation order for two solvers on the same GameObject?
They update in the order they appear on the GameObject in the Inspector, top to bottom (as long as GetComponents continues to return them in that order!). You can rearrange component order by clicking the gear icon for a component in the Inspector and choosing 'Move Up' or 'Move Down'. Hope this helps!

How to find all layers in Mapboxgl? Ultimately I want to show a custom layer only on water and not on land

I created a custom circle layer. I want to show this layer only on water and not on land. I managed to do the opposite (i.e. showing the layer on land and not on water) using the command below.
map.moveLayer('polygon','water');
Now I need to know the land layer used by Mapboxgl so that I can call map.moveLayer('polygon','land'); to achieve what I want.
I need help finding the different layers present in the mapboxgl-streets map. Unfortunately, Mapboxgl doesn't have a map.eachLayer function.
You can use the Map#getStyle method to get a serialized representation of the entire style including the layers.
map.getStyle().layers
It depends on the map style you're using. In general, you either have to look at its source or load it in Mapbox Studio to identify the correct layer name. Also keep an eye on https://github.com/mapbox/mapbox-gl-js/issues/4173.
Just to add to Lucas' answer (which is still correct): map.getStyle().layers provides all layers in the style, including the ones you have explicitly added (via map.addLayer()) and those that come with the style itself (which could be a lot). Be careful how you filter through these. In my case, I created arrays to hold the layers I created myself, to make future iteration simpler.
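To complement the advice about looking at the style's source: the same layer list that map.getStyle().layers returns in the browser can also be inspected offline by fetching the style JSON from the Styles API. A minimal Python sketch, assuming the requests package and a valid access token (the mapbox/streets-v11 style id is an example; substitute the style you actually use):

import requests

ACCESS_TOKEN = "YOUR_MAPBOX_TOKEN"  # placeholder
# mapbox://styles/mapbox/streets-v11 corresponds to this Styles API endpoint:
url = "https://api.mapbox.com/styles/v1/mapbox/streets-v11"

style = requests.get(url, params={"access_token": ACCESS_TOKEN}).json()
for layer in style["layers"]:
    # Each layer has an id and a type (fill, line, symbol, ...); most also
    # name the vector-tile source-layer they draw from.
    print(layer["id"], layer["type"], layer.get("source-layer", "-"))

Scanning the printed ids for water- or land-related names (the streets styles typically include layers such as 'water', 'landcover', and 'landuse') tells you which id to pass to map.moveLayer.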

Object Tracking in a non-static environment

I am working on a drone-based video surveillance project in which I am required to implement object tracking. I have tried conventional approaches, but these seem to fail due to the non-static environment.
This is an example of what I would want to achieve. But this uses background subtraction, which is impossible to achieve with a non-static camera.
I have also tried feature-based tracking using SURF features, but it fails for smaller objects and is prone to false positives.
What would be the best way to achieve the objective in this scenario?
Edit: An object can be anything within a defined region of interest. The object will usually be a person or a vehicle. The idea is that the user will draw a bounding box that defines the region of interest, and the drone then has to start tracking whatever is within this region of interest.
Tracking local features (like SURF) won't work in your case, and training a classifier (like boosting with Haar features) won't work either. Let me explain why.
Your object to track will be contained in a bounding box. Inside this bounding box there could be any object, not necessarily a person, a car, or whatever else you used to train your classifier.
Also, near the object, the bounding box will contain background that changes as soon as your target object moves, even if the appearance of the object itself doesn't change.
Moreover, the appearance of your object may change (e.g. a person turns or drops their jacket, a vehicle catches a reflection of the sun, etc.), or the object may get (partially or totally) occluded for a while. So tracking local features is very likely to lose the tracked object very soon.
So the first problem is that you must deal with potentially many different objects to track, possibly unknown a priori, and you cannot train a classifier for each one of them.
The second problem is that you must follow an object whose appearance may change, so you need to update your model.
The third problem is that you need some logic that tells you that you lost the tracked object, and you need to detect it again in the scene.
So what to do? Well, you need a good long term tracker.
One of the best (to my knowledge) is Tracking-Learning-Detection (TLD) by Kalal et al. You can see a lot of example videos on the dedicated page, and you can see that it works pretty well with moving cameras, objects that change appearance, etc.
Luckily for us, OpenCV 3.0.0 has an implementation of TLD, and you can find a sample code here (there is also a Matlab + C implementation on the aforementioned site).
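To give an idea of the API, here is a minimal Python sketch of running OpenCV's TLD implementation (the video file name is hypothetical, and the factory function name varies across OpenCV versions):

import cv2

cap = cv2.VideoCapture("drone.mp4")  # hypothetical input video
ok, frame = cap.read()

# Let the user draw the region-of-interest bounding box, as in the question.
bbox = cv2.selectROI("select object", frame, fromCenter=False)

# Factory name varies by version: cv2.TrackerTLD_create() on OpenCV 3.x with
# contrib modules, cv2.legacy.TrackerTLD_create() on OpenCV 4.x.
tracker = cv2.legacy.TrackerTLD_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    else:
        # TLD reports when the target is lost; its detection stage may
        # re-acquire the object in a later frame.
        cv2.putText(frame, "target lost", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
    cv2.imshow("TLD", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break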
The main drawback is that this method can be slow. You can test whether that's an issue for you; if so, you can downsample the video stream, upgrade your hardware, or switch to a faster tracking method, but this depends on your requirements and needs.
Good luck!
The simplest thing to try is frame differencing instead of background subtraction. Subtract the previous frame from the current frame, threshold the difference image to make it binary, and then use some morphology to clean up the noise. With this approach you typically only get the edges of the objects, but often that is enough for tracking.
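For reference, here is an equivalent sketch in Python/OpenCV (the suggestions below use MATLAB's Computer Vision Toolbox; the video file name here is hypothetical):

import cv2

cap = cv2.VideoCapture("drone.mp4")  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Subtract the previous frame from the current one and make it binary.
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Morphological opening, then closing, to clean up the noise.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    cv2.imshow("motion mask", mask)
    prev_gray = gray
    if cv2.waitKey(1) == 27:  # Esc quits
        break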
You can also try to augment this approach using vision.PointTracker, which implements the KLT (Kanade-Lucas-Tomasi) point tracking algorithm.
Alternatively, you can try using dense optical flow. See opticalFlowLK, opticalFlowHS, and opticalFlowLKDoG.