Delete device connector in Visio's Ethernet shape

I have a really annoying problem. I am trying to create a Visio 2016 drawing with an Ethernet shape (Network -> Network and Peripherals) and, for the life of me, I cannot seem to delete the device endpoints after I drop the shape onto my canvas. I mean these things:
Does anyone have an idea on how to delete these?

Select the shape and you'll see a (yellow) control handle for each point. Drag it inside the bounds of the core shape and it will disappear.

Related

How to disable image detection if there is already a detected image

I have what is, in my opinion, a simple problem: disabling image detection with the AR camera. My app detects an image from the reference image library and spawns an object, and so on; everything goes according to plan.
But the problem is that if I move the camera over another detectable image, it recognizes that one too. This is bad not because it spawns something additionally, but because you can "collect" the images in my app, so the other image gets unlocked even though it shouldn't.
So how can I disable image detection without turning off the AR camera?
So far I have tried simply disabling the "ARManager" and "ARTrackedImageManager" scripts (.enabled = false), but that didn't solve my problem; the app still detects other images.
I hope I have explained my question and problem properly. Any help is appreciated!
It really depends on which library you're using to detect the image. Generally, most marker-tracking libraries will create a marker object in your Unity scene. You can disable these marker objects after you find one, leaving only the marker you're interested in. Make sure you also set the number of tracked images to 1 so you won't accidentally find two markers in one frame.
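For example, with Unity's AR Foundation (which the "ARTrackedImageManager" mentioned in the question suggests), a rough sketch of this idea is to remember the first detected image and deactivate the content of every other tracked image; the class and field names below are placeholders, not anything from the question:

using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: lock onto the first detected reference image and hide everything
// else, without disabling the AR session or the tracked image manager.
public class SingleImageLock : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager imageManager;

    string lockedImageName;   // name of the first reference image we saw

    void OnEnable()  { imageManager.trackedImagesChanged += OnChanged; }
    void OnDisable() { imageManager.trackedImagesChanged -= OnChanged; }

    void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var trackedImage in args.added)
        {
            if (lockedImageName == null)
            {
                // First detection: from now on this is the only image we care about.
                lockedImageName = trackedImage.referenceImage.name;
            }
            else if (trackedImage.referenceImage.name != lockedImageName)
            {
                // Any other image: hide its spawned content so it can't be "collected".
                trackedImage.gameObject.SetActive(false);
            }
        }
    }
}

Depending on your AR Foundation version, limiting the manager's maximum number of moving images to 1 (as suggested above) also helps.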

Detect coordinates of some element in an image

Please tell me how to solve this problem.
Where to start and which way to go.
I have an image with some buttons:
How can I detect the coordinates of the round blue button, for example?
The difficulty lies in the fact that these are not application buttons, but just a picture on the desktop.
I understand that this is a vast and complex question, but tell me at least the right way.
It will be useful to many people.
The first thing I can imagine is to take a screenshot of the desktop and then try to detect the blue pixels.
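As a rough illustration of that brute-force idea (a sketch only: it assumes the screenshot has been saved to an image file, uses .NET's System.Drawing, picks arbitrary colour thresholds, and only makes sense if the button is the only large blue region):

using System.Drawing;   // System.Drawing.Common package on modern .NET

static class BlueButtonFinder
{
    // Scan every pixel, collect the "blueish" ones and return their centroid.
    public static Point? FindBlueCentroid(string screenshotPath)
    {
        using var bmp = new Bitmap(screenshotPath);

        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < bmp.Height; y++)
        for (int x = 0; x < bmp.Width; x++)
        {
            Color c = bmp.GetPixel(x, y);   // slow but simple; LockBits is faster
            bool blueish = c.B > 150 && c.B > c.R + 60 && c.B > c.G + 60;
            if (blueish) { sumX += x; sumY += y; count++; }
        }

        return count == 0 ? (Point?)null : new Point((int)(sumX / count), (int)(sumY / count));
    }
}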
You don't need to do manual image detection because Apple's Vision framework already does this. You can use it to detect rectangular regions, detect text, or recognize an image within an image, depending on your needs.
See Detecting Objects in Still Images

Displaying ARKit nodes in relation to real objects

I am trying to draw a box that can help someone understand the dimensions of an item, but I keep running into the issue that, since I first need to recognize a plane and then put my physical item on top of it, my box gets drawn in front of the item.
Is it possible to somehow overcome this?
@John Scalo is right: your problem is not having to detect a plane first; it's that your render engine doesn't know that part of your green box frame is occluded (hidden) by a real-world object.
"…to somehow overcome this"
Yes, and by doing so you might also be "solving" your original problem: helping someone understand the dimensions of an item.
Depending on your choice of render engine (e.g. SceneKit), you can add an invisible 3D object that has the same dimensions as the real-world object; the render engine will then "know" that some parts of your box frame are behind this (invisible to the user) 3D object, and you can tell it not to draw those parts, which gives the illusion (borrowing from Apple here) that your soda can has the box around it.
These workarounds are inaccurate, but maybe their accuracy is enough for the level of realism you are trying to achieve:
Option 1: After detecting the desk surface, place a semi-transparent 3D object over the soda can and resize it (gestures or buttons, your choice) until it roughly matches the can's dimensions. Then confirm that you're done and stop drawing any texture on it at all; just let it occlude the green box frame (a rough sketch of this confirm-and-occlude step follows the list of options).
Option 2: Hold your device near the edges of the soda can and add "enough" ARAnchors to create a "bounding shape" that (again) can be used to capture the real-world object and occlude the box frame behind it.
Option 3: (intense, and perhaps the least accurate) Use your finger to "brush" over the object from various angles, and on each touch perform a hit test (hopefully the top/nearest hit is a part of your soda can) and build up a "bounding shape" that way.
Option X: any combination of 1 - 2 - 3.
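For Option 1's final step, here is a minimal Unity-flavoured sketch; the answer is written with SceneKit in mind, so this is a swapped-in illustration, and the depth-only material and all names are assumptions rather than anything from the original answer:

using UnityEngine;

// Assumes "occluder" is the semi-transparent box the user has already placed and
// resized over the soda can, and "depthOnlyMaterial" uses a small custom shader
// that writes depth but no colour (ColorMask 0) and renders early (e.g. a
// "Geometry-1" queue) so geometry behind it is hidden.
public class CanOccluder : MonoBehaviour
{
    [SerializeField] GameObject occluder;
    [SerializeField] Material depthOnlyMaterial;

    // Call this when the user confirms the occluder matches the real object.
    public void ConfirmSize()
    {
        var rend = occluder.GetComponent<Renderer>();
        rend.sharedMaterial = depthOnlyMaterial;  // stop drawing colour, keep writing depth
    }
}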
Good luck; there are lots of people trying to work around this device/ARKit limitation at the moment, so keep your eyes open for good ideas.
The problem you're dealing with is called occlusion, and ARKit doesn't (currently?) include occlusion support. Maybe some day soon iPhones and iPads will begin to ship with LIDAR (or similar), in which case ARKit will be able to detect objects in the scene, making occlusion much easier.

How to draw marks on Unity 3D plane?

I am new to Unity. Sorry if I have a beginner style of question.
I want to implement a 3D chess game in Unity. I have already implemented a C++ shared library that contains the whole AI; I have used it in WPF and Android apps and it is well tested. Now it is Unity's turn.
When the user selects a piece, its possible next moves should be shown.
These marks can be a light or an image, circular or rectangular.
One way to do this is to have 64 marks, one for each square of the chessboard, and change their visibility programmatically.
The other way, which I personally prefer, is to draw the marks programmatically. But I don't know how to draw on my chessboard plane.
Please guide me with it.
FINAL RESULT (just a mock-up!)
STEP BY STEP:
(I assume you already have a chessboard)
1. Create a Material & configure it like in the below image. Note that the albedo green is 50% transparent:
2. Create a Quad & assign it the newly created Material above. Then set
it up like in the below image:
3. Now we will add the glow effect. First, we need to turn off the
Anti-Aliasing by switching to Good Quality instead of Fantastic.
4. Second, we need to enable HDR in the main camera:
5. Third, we need to import the Image Effects package. This package
is part of the Standard Assets that is shipped with Unity. It is
completely free. Get it here if you haven't.
https://www.assetstore.unity3d.com/en/#!/content/32351
You only need the Image Effect package.
6. Now add the Bloom effect to your main camera.
7. That's it! If you need to hide it via code then get the reference to
it and execute this line of code:
yourQuad.SetActive(false);
See more here:
https://docs.unity3d.com/ScriptReference/GameObject.SetActive.html
8. Finally, duplicate that quad to create 64 of them and position them properly. There are two tricks that can make your life a lot easier:
To quickly duplicate a group of objects: select all of them and press: Ctrl + D
To enable edge-snap: select your quad and hold down V, then hover your mouse over the quad's vertex. You will see a white square around it. Drag that vertex and see the magic.
9. From here on, it is your game logic to implement. You could store all the quads in a two-dimensional array (matrix) and manipulate them yourself, as sketched below; that is all I can think of. Good luck!
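A minimal sketch of that last step, assuming the 64 quads are assigned in the Inspector in a1..h8 order; all names here are placeholders, and the move list would come from your C++ AI library:

using System.Collections.Generic;
using UnityEngine;

// Keep the highlight quads in an 8x8 matrix and toggle them when a piece is selected.
public class MoveHighlighter : MonoBehaviour
{
    [SerializeField] GameObject[] highlightQuads = new GameObject[64];

    GameObject[,] board = new GameObject[8, 8];

    void Awake()
    {
        for (int i = 0; i < 64; i++)
            board[i % 8, i / 8] = highlightQuads[i];   // [file, rank]
        HideAll();
    }

    public void HideAll()
    {
        foreach (var quad in highlightQuads)
            quad.SetActive(false);
    }

    // Call with the legal destination squares returned by the chess engine.
    public void ShowMoves(IEnumerable<(int file, int rank)> moves)
    {
        HideAll();
        foreach (var (file, rank) in moves)
            board[file, rank].SetActive(true);
    }
}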

How to correct the middle coordinate between two coordinates

I have an app that draws a red line on the Apple map while the user drives a car with an iPhone.
But sometimes, when the driver takes corners too fast, I don't get all the coordinates, so the app connects two points with a straight line (and I get a flying car :)).
Is there any way to make this more precise?
On the image I show what happens.
The large error in this image is at 'terminal ave and Clark...'
As the user drives, I store each new coordinate in a local database and draw the route based on that.
But these errors drive me crazy.
Any idea or example of how to fix this error on corners?
I don't think you can do anything truly precise here. The only thing you can do is "guess" the route the driver took.
You can soften this "straight line effect" with some kind of interpolation between two points that are too far apart from each other.
In the video game industry, when an autonomous character moves from one point to another, steering behaviors are used to keep it from just heading straight forward, so it moves in a smooth, rounded way.
Maybe read about them and see if you can apply them to your project.
Steering behaviours
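One concrete form of the interpolation mentioned above is a Catmull-Rom spline through the recorded fixes. The sketch below treats latitude/longitude as plain 2D values (acceptable over short distances) and uses placeholder names; the app itself is on iOS, but the math ports directly, and of course it only guesses the real road rather than recovering it:

using System;
using System.Collections.Generic;

struct Coord
{
    public double Lat, Lon;
    public Coord(double lat, double lon) { Lat = lat; Lon = lon; }
}

static class RouteSmoother
{
    // Catmull-Rom for one component; p1..p2 is the segment being smoothed,
    // p0 and p3 are its neighbours.
    static double CatmullRom(double p0, double p1, double p2, double p3, double t)
    {
        double t2 = t * t, t3 = t2 * t;
        return 0.5 * (2 * p1
                      + (-p0 + p2) * t
                      + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                      + (-p0 + 3 * p1 - 3 * p2 + p3) * t3);
    }

    // Insert "steps" extra points between every pair of recorded fixes.
    public static List<Coord> Smooth(IReadOnlyList<Coord> fixes, int steps = 8)
    {
        var result = new List<Coord>();
        if (fixes.Count == 0) return result;

        for (int i = 0; i < fixes.Count - 1; i++)
        {
            Coord p0 = fixes[Math.Max(i - 1, 0)];
            Coord p1 = fixes[i];
            Coord p2 = fixes[i + 1];
            Coord p3 = fixes[Math.Min(i + 2, fixes.Count - 1)];

            for (int s = 0; s < steps; s++)
            {
                double t = s / (double)steps;
                result.Add(new Coord(
                    CatmullRom(p0.Lat, p1.Lat, p2.Lat, p3.Lat, t),
                    CatmullRom(p0.Lon, p1.Lon, p2.Lon, p3.Lon, t)));
            }
        }
        result.Add(fixes[fixes.Count - 1]);   // keep the final recorded point
        return result;
    }
}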