Is there a decent eye tracking package to replace the mouse for code editing?
I want to free up the mouse, but keep using my keyboard for editing code.
Having done some research on it, I concluded that proper eye tracking hardware is expensive. Using a webcam or high resolution video camera seems to be the most viable option.
Unfortunately, image-based tracking (as opposed to infra-red tracking) restricts the accuracy, and so not all features might be practical.
Desired eye-tracking IDE features:
Page scrolling
Tab selection
Setting cursor position
Selecting gaze-focused text with keyboard
A similar question recommends Opengazer for webcams, but I am particularly interested in speeding up basic text-editing. Any recommendations are appreciated, especially if you have experience with eye tracking and practical use cases.
The kind of accuracy you're looking for is pretty difficult to achieve (since text tends to be pretty small).
IR tracking is actually pretty easy to accomplish: a few IR LEDs and an IR camera (which is really just a normal camera with a different filter), and your pupil lights up. This can be done for under $100, more if you want a better camera.
It's the head tracking that might be more of an issue.
You end up with quite a few degrees of freedom that you need to track, and your inaccuracies will just build up.
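To put rough numbers on that (my own back-of-the-envelope figures, not from the answer): even a good tracker is only accurate to roughly 0.5-1° of visual angle, and at a typical viewing distance that error already spans a couple of lines of editor text:

```python
import math

# Back-of-the-envelope: on-screen error produced by an angular gaze error.
viewing_distance_cm = 60.0   # typical eye-to-monitor distance
gaze_error_deg = 1.0         # optimistic accuracy for webcam-based tracking

error_cm = viewing_distance_cm * math.tan(math.radians(gaze_error_deg))
print(f"{error_cm:.2f} cm on screen")  # ~1.05 cm, i.e. 2-3 lines of code text
```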
I'm pretty sure there is no out-of-the-box solution for this problem, but eyewriter.org has really nice instructions on how to build your own eye tracker. It's accurate enough to let someone "draw" graffiti using only their eyes, so it should be possible to convert the eye movements into mouse events.
It can be done reasonably accurately (à la this article on how people read code), but I've never seen a commercial product that does what you're asking for.
Maybe take a look at Emotiv's headsets: they use thought patterns to perform tasks. They're designed for games, but you could probably repurpose them for normal tasks.
Regarding text cursor placement: Lightning (while I have not worked on this particular feature, I previously contributed to the Text 2.0 project as a student), which is described in this paper:
Universal eye-tracking based text cursor warping
will place the text cursor at the most salient target in the neighborhood of the gaze position reported by the eye tracker.
However, you need a Tobii eye tracker that supports the TET API. You might want to contact Tobii to verify that the Tobii X2-30 eye tracker, which costs < $10k, is compatible.
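I can't share Lightning itself, but the core warping idea is straightforward to sketch: snap the noisy gaze point to the nearest plausible cursor position. A minimal sketch assuming a monospaced character grid (the paper's saliency model is more sophisticated; glyph sizes here are invented):

```python
# Minimal sketch of gaze-to-cursor warping on a monospaced grid.
# The real Lightning system uses a saliency model; this just snaps
# the (noisy) gaze point to the nearest character cell.

CHAR_W, LINE_H = 8, 16  # assumed glyph size in pixels

def warp_cursor(gaze_x, gaze_y, lines):
    """Return (row, col) of the text cursor closest to the gaze point."""
    row = min(max(int(gaze_y / LINE_H), 0), len(lines) - 1)
    col = min(max(int(gaze_x / CHAR_W), 0), len(lines[row]))
    return row, col

lines = ["def main():", "    print('hello')"]
print(warp_cursor(93, 20, lines))  # -> (1, 11)
```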
Just use vim. Do more with the keyboard, less with the mouse.
Personally, I had an issue with always having to reach for a normal mouse. I looked at various options (eye tracking, voice, touchscreen) and ended up switching the keyboard to one with an IBM TrackPoint. The end result is that my hands never leave the keyboard, and my typing speed and accuracy improved because I no longer have to reposition my right hand.
Eye Tribe has a $99 consumer-level eye tracker that is available now.
“Using a webcam or high resolution video camera seems to be the most viable option.”
Eye Tribe is a spinoff of Gaze Group, a research group located at the IT University of Copenhagen. The people of Gaze Group developed the open-source ITU GazeTracker software, which allows people to turn low-cost webcams into eye trackers.
http://www.gazegroup.org/downloads
Looking at the “downloads” section of the Gaze Group site, it seems there are already some eye-tracking applications for performing basic actions.
melhosseiny mentioned the Text 2.0 framework for creating eye tracking apps using HTML, CSS and JavaScript, and the Universal eye-tracking based text cursor warping feature for placing the text cursor at the most salient target.
Eye Tribe has its own SDK, but the projects above could help if they work with Eye Tribe's hardware.
I want to make an ocean simulation that is physically accurate.
The height and speed of the waves should be controlled by the keyboard at runtime.
In the ocean, there needs to be a boat that either moves along a path or is controlled by the keyboard.
So far I have made this simulation in Blender:
https://youtu.be/LJ6ncxv-k7w
The problems are as follows:
1. There is no collision with the ocean
2. There are no controllers for the boat's movement
3. I am able to control the waves, but not at runtime
I thought about switching to Unity because the user interface is obviously better, as it is a game engine. I do not want to use Blender's game engine as its future is uncertain at this point.
After reviewing the various Unity water simulation plugins, I came to these conclusions:
1. The buoyancy is great in most of them, such as in Aquas and SUIMONO.
2. None of them seems to offer a physically realistic collision with the boat.
3. They do offer wave height control, but not much else as far as wave properties go.
4. Some of the plugins can be combined to get closer to satisfactory results.
My question is:
Should I go with Unity completely?
It seems perfect for my user control needs, but the plugins are lacking in the collision aspect. I came across this video, but no tutorial: https://www.youtube.com/watch?v=T0D_vrYm4FQ
Even if there was one, how could I combine it with the plugins?
Is there a way to build the scene in Blender and then import it into Unity?
Would I be able to control the waves and boat after importing them?
Thank you very much for your time and knowledge.
If you really mean an ocean, I suggest you check out NVIDIA WaveWorks. It's a C library and doesn't have an official integration with Unity3D, but since you've come this far, I guess you'll have enough courage to try turning it into a usable plugin yourself.
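Whichever engine you settle on, the runtime control asked for above mostly boils down to sampling a parametric height field whose parameters you can change every frame, plus deriving buoyancy from how deep the hull sits below the surface. A minimal sketch (plain Python, not WaveWorks or any Unity plugin; all constants invented):

```python
import math

# Parametric wave: height at position x and time t. Amplitude, wavelength
# and speed are plain variables, so keyboard input can change them at
# runtime simply by assigning new values each frame.
def wave_height(x, t, amplitude, wavelength, speed):
    k = 2 * math.pi / wavelength  # wavenumber
    return amplitude * math.sin(k * (x - speed * t))

# Crude buoyancy: upward force proportional to how deep the hull sits
# below the surface (Archimedes, with density/area folded into k_buoy).
def buoyancy_force(boat_y, x, t, amplitude, wavelength, speed, k_buoy=9.8):
    depth = wave_height(x, t, amplitude, wavelength, speed) - boat_y
    return k_buoy * max(depth, 0.0)  # no upward force once the hull is clear

print(buoyancy_force(boat_y=-0.5, x=0.0, t=0.0,
                     amplitude=1.0, wavelength=10.0, speed=2.0))  # 4.9
```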
The player carries the flashlight, so it is moving all the time. I'm aware of using a spotlight to make a flashlight when developing for PC, but it doesn't work for Android. I have tried searching about it, and all I have come across is creating a dynamic material that's applied to a certain area to give the illusion of a flashlight, and it doesn't look good at all. So I would like to know if there is any other way to achieve this.
I think one option would be to use post-processing. I am not sure whether this is better than using materials, but it is a different (and perhaps easier) way.
Here is an example that I made quickly (obviously you would need to fine-tune it):
fake light GIF
This contains ambient light and post processing effect.
Yellow area in middle that you see as light is not point light, it is just effect.
It is possible to change "light" area/intensity/color/... as well as overall darkness.
Also, worth mentioning: I made this quickly since I already had a somewhat similar post-processing effect; I just adjusted it so it looks like a flashlight.
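For what it's worth, the math behind such an effect is just a radial brightness mask multiplied over the darkened scene. A sketch of the per-pixel logic (plain Python for illustration; in the engine this lives in the post-process material, and all constants here are invented):

```python
# Per-pixel brightness for a fake flashlight: a bright disc around the
# screen centre that falls off smoothly into the ambient darkness.
def flashlight_brightness(dx, dy, radius=0.25, softness=0.15, ambient=0.05):
    """dx, dy: pixel offset from screen centre in UV units (0..1)."""
    dist = (dx * dx + dy * dy) ** 0.5
    # 1.0 inside the disc, fading linearly to 0.0 across the soft edge.
    t = min(max((radius + softness - dist) / softness, 0.0), 1.0)
    return ambient + (1.0 - ambient) * t

print(flashlight_brightness(0.0, 0.0))   # 1.0  (centre, fully lit)
print(flashlight_brightness(0.5, 0.0))   # 0.05 (edge, ambient only)
```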
You can find more information here:
Post Process Effects
Post Process Materials
Facebook released a demo video of their Surround 360 technology a few days ago, called "Here and Now": https://www.facebook.com/facebook/videos/10154659446236729/
Apparently they are using their proposed cubic mapping perspective for this. Can someone familiar with it verify that?
Also, when I rotate my head with my Gear VR on, I notice a slight quality improvement. Does anybody know if they are using adaptive view-aware streaming such as DASH for that (which would be impressive)? I am assuming the video is not fully downloaded before it plays, so maybe the change is not due to rendering.
Facebook uses pyramid encoding. They put a sphere inside a pyramid so that the base of the pyramid is the full-resolution FOV and the sides of the pyramid gradually decrease in quality until they reach a point directly opposite from the viewport, behind the viewer. That explains why, when you turned your head with the GearVR on, you noticed a quality change. They don't use MPEG-DASH, yet.
https://code.facebook.com/posts/1126354007399553/next-generation-video-encoding-techniques-for-360-video-and-vr/
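As a rough illustration of why quality drops behind you (a toy falloff model of my own, not Facebook's actual projection math):

```python
# Toy model of pyramid encoding's quality falloff: full resolution in
# the direction the pyramid base faces, dropping toward the apex
# directly behind the viewer. Not Facebook's actual projection math.
def quality_scale(angle_deg, floor=0.25):
    """angle_deg: angle between gaze and the encoded viewport (0..180)."""
    t = angle_deg / 180.0               # 0 = straight ahead, 1 = behind
    return 1.0 - (1.0 - floor) * t      # linear falloff to a quality floor

for a in (0, 45, 90, 180):
    print(a, round(quality_scale(a), 2))  # 1.0, 0.81, 0.62, 0.25
```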
Is there any program that allows custom gestures recording and exporting?
Specifically, custom gestures for the Leap Motion.
The pre-made gestures are not enough for me to make the app.
I tried this old system:
LeapTrainer
However, I have problems with importing and exporting, and the exported data does not seem usable outside of LeapTrainer.
Update 1: I tried to find tools on the Unity Asset Store, but to no avail. Can anyone suggest some tools/SDKs? My main purpose is to use gestures for dynamic slashing (vertically/horizontally/diagonally).
Can anyone help me?
I started the same way you did, but I ended up building my own gestures based on the API outputs.
It's not that hard; you just need to think it through a bit.
For example, when working with fingers, isExtended and the angle between them help a lot.
For the palm, you can use GetPosition and where it is pointing.
So you can do: if the palm is pointing at my face and the hand is open (based on the fingers), mimic the arm HMD menu that you see in the LM samples out there.
Or: if the index finger is extended and the thumb is up, you've drawn a gun with your hand; if the thumb angle is < 10°, make the gun shoot.
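For the slashing you mentioned in your update, the same roll-your-own approach works: read the palm velocity each frame and classify the dominant axis once it passes a speed threshold. A sketch with plain values standing in for what the LM API reports per frame (names and thresholds are mine, not the SDK's):

```python
# Sketch: classify a slash gesture from palm velocity. The velocity
# triple stands in for what the Leap Motion API reports per frame;
# thresholds are made-up values to tune against real data.
SLASH_SPEED = 600.0  # mm/s, assumed minimum speed to count as a slash

def classify_slash(vx, vy, vz):
    speed = (vx * vx + vy * vy) ** 0.5   # ignore depth for a 2D slash
    if speed < SLASH_SPEED:
        return None
    # Compare axes: mostly-horizontal, mostly-vertical, else diagonal.
    if abs(vx) > 2 * abs(vy):
        return "horizontal"
    if abs(vy) > 2 * abs(vx):
        return "vertical"
    return "diagonal"

print(classify_slash(800, 50, 0))    # horizontal
print(classify_slash(100, -700, 0))  # vertical
print(classify_slash(500, -550, 0))  # diagonal
```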
I truly recommend going that way; it will help expand your knowledge of the device API. Third-party tools might work, but you need to learn how to use them anyway, so you're better off spending that time learning the LM API.
I'm working on a 2D game (kind of like a top down space shooter) for the iPhone using an engine very similar to cocos2d (not exactly though) on OpenGL ES. I'm trying to figure out how I'm going to do collision detection.
All the ships for my game are images, and the game will load the image as a texture onto the screen. I've got very very simple detection going already that basically just takes the rectangles of the images and checks to see if those collide and can do that just fine.
But of course the ship doesn't perfectly fill the entire rectangle, so there is whitespace in there. So my question is: how am I supposed to account for that whitespace? Do I have to have the matrices of the ships stored? Or is there another way? I've also heard of possibly using the Chipmunk physics engine for collision detection - how would that work?
(1) Regarding Chipmunk, the short answer is yes: you should immediately download Chipmunk, donate something to the bloke, and start learning about it.
Working with it for a day or so will basically answer all the questions you have. If you want to work with physics games, you're going to need to get into it.
(2) You ask about using an approximation ("just" a rectangle) instead of something more accurately shaped like your spaceships. In fact, you'll perhaps be amazed to learn that this is precisely how it is usually done in the famous big-name games you've played since we were all kids! Indeed, sometimes you might use little more than A DOT (!) to detect collisions.
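That "dot" test really is this trivial (a sketch; the radii are whatever roughly covers each sprite):

```python
# Circle-vs-circle "dot" test: two ships collide when the distance
# between their centres is less than the sum of their radii.
def dot_collide(ax, ay, ar, bx, by, br):
    dx, dy = bx - ax, by - ay
    r = ar + br
    return dx * dx + dy * dy < r * r  # squared distances: no sqrt needed

print(dot_collide(0, 0, 10, 15, 0, 10))  # True  (centres 15 apart, radii sum 20)
print(dot_collide(0, 0, 10, 25, 0, 10))  # False
```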
What you'd probably do in production is try a more complicated model, play with it for a few hours, and see whether it is actually any better to play with than your simple dot or rectangle model.
If you do want to make a more complicated model -- just make one! Build it up from three or four rectangles using your current system. Try them all against each other, and have one big one to check first to see whether the ships are even anywhere near each other (a simple broad-phase check).
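A sketch of that build-it-from-rectangles idea, with the "one big one" as a cheap broad phase (all shapes invented):

```python
# Each ship: one big bounding rect checked first (broad phase), then a
# few tighter rects that hug the sprite's actual shape (narrow phase).
def rects_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def ships_collide(ship_a, ship_b):
    if not rects_overlap(ship_a["bounds"], ship_b["bounds"]):
        return False  # nowhere near each other; skip the detailed test
    return any(rects_overlap(ra, rb)
               for ra in ship_a["parts"] for rb in ship_b["parts"])

# Invented example: cross-shaped ships made of two rectangles each.
ship = {"bounds": (0, 0, 30, 30),
        "parts": [(10, 0, 10, 30),   # fuselage
                  (0, 10, 30, 10)]}  # wings
other = {"bounds": (25, 25, 30, 30),
         "parts": [(35, 25, 10, 30), (25, 35, 30, 10)]}
print(ships_collide(ship, other))  # False: the bounds overlap at a corner,
                                   # but no solid parts actually touch
```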
You will find that when you do it with Chipmunk (which, as you now know, you have to begin immediately after reading this message), you just build it up the same tedious way. It's not a magic bullet. But if you were going to use a "more complicated model", yes, it is better to do the work in something standard like Chipmunk - it will get done quicker and better. There's heaps to learn, so hop to it!
(3) Unity is not just for 3D. Finally, if you want to do it the smart-ass grown-up way, you'd use Unity3D, which lets you get close to the metal with NVIDIA's PhysX physics. Note that Unity works perfectly for 2D games as well - you just click one button in Unity to use a 2D projection (many brand-name iPhone 2D games are done exactly like that).
If you use that approach, you can (if you want) have "absolutely exact" physics, with every nook and cranny of your model modelled.
What is the downside to doing this? Ah hah ... well, the thing is, you need superb actual 3D models of all the stuff in your game! (Like you see them building in the "how we made the movie" special features that come with your favourite Pixar Blu-ray.) To do that you need tools like Autodesk Maya and the like. You would quite likely buy some models ready-made from a digital prop shop (no need to build "a chair" when it has been done 1000 times already and you can buy one for ten dollars).
(Unity3D is completely free to use for a few months while you see if it can make you money.)
Incidentally, on the Chipmunk front -- you can just use Corona, which is ridiculously easy to use and has Chipmunk-like physics completely built in, with zero effort on your part! You could have the whole game done in less time than it took to write this answer. You could be selling your game already and thinking up the next one. Or you could use Cocos, which indeed has a Chipmunk-like physics library built in ... personally (just me), I do not like and won't touch Cocos - but of course many games use it.
(It seems pointless, to me, to use Cocos, which is a "for idiots" product, when you can just go ahead and use Corona, which is also a "for idiots" product but stupendously easier to use, 1000x more solid, and probably literally 10x faster for finishing your product and starting to make money.)
Summary:
So in some sense, using Unity3D (and hence NVIDIA's PhysX physics) is the ultimate solution if you want detailed nook-and-cranny collisions. Going down one step, Chipmunk is exactly, precisely what you should be using on the iPhone/iPad for 2D physics -- it is precisely what is used in all the famous games we know so well. You have a bit of learning to do, so hop to it - it's super fun. Finally, go right ahead and just make your current model more complicated if you wish - roll your own by adding more rectangles!
(4) Be sure to remember that in games, astonishingly, you can often get away with remarkably simple physics - often SIMPLER than one rectangle: just a point, i.e. simply measuring the distance between centers! (5) After going to all the effort of testing more detailed physics, playtest the versions against each other and find out what is the simplest physics you can get away with.