Is this ever going to be possible? We have voice, image, data, etc. over Ethernet. Is touch over Ethernet possible?
With the strides being made in Brain Computer Interfaces (e.g. controlling computers with nothing more than neural impulses), it's only logical to conclude that one day we'll be able to "reverse the stream" and have computers send the impulses to our brain that we can then interpret as touch, sight, hearing, taste, and smell. Who knows, it's quite possible that it has been done already and we are all dreamers in a digital world.
How's that for a mind frack?
I'm researching and trying to build an RC car that can be controlled over the internet. I've started looking into how communication over the web works, but I seem to be going nowhere. My goal for the project is straightforward:
The RC car has an on-board camera and 4g wifi router that enables communication (driving commands, video streaming) over the internet. A Raspberry Pi will serve as the on-board computer.
I will be able to control the car from my PC, even across the globe, as long as I'm connected.
I would prefer to do as much as possible by myself, without relying too much on other people's code.
So here are my questions:
How does an application communicate over the internet? What is the interface between the application's logic (e.g. pressing "w" to go forward) and transmitting/receiving that command over the internet?
How is the video stream handled?
I've looked into WebRTC and WebSockets for communication, but they are aimed at providing real-time communication for web browsers and mobile, not something like a Raspberry Pi, and I'm still in the dark as to exactly what technology I should use, and in general about the overview and architecture of real-time communication.
All I've achieved so far is an app that sends text messages between devices through a server on my network, with very primitive reading/writing using Java's Socket.
In short, what do Messenger/Skype/Zoom do in the background when you send a message or make a video call?
Any guidance would be greatly appreciated.
First things first: you cannot do real-time control over the Internet, period. There is absolutely no way to guarantee delivery latency. Your control commands can arrive with a delay of anywhere from milliseconds to seconds, or never. No way around it.
Now, you can still take a number of reasonable steps to absorb that unpredictable latency as much as possible and safeguard your remote robot from the consequences of unreliable communication.
For example, instead of sending the drive commands directly (acceleration, deceleration, turn angle, etc.), you can send a projected trajectory that is calculated from your drive commands locally, on a model. Your RC car must be sufficiently smart to do some form of localisation (at the very least, wheel odometry), and with a good enough time sync between the sender and the RC car you'll be able to control its behaviour remotely without the nasty consequences of drive commands executed after an unpredictable delay.
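To make the localisation part concrete, here is a minimal sketch of differential-drive wheel odometry plus a short-horizon projection, assuming encoder tick counts are available; the class names and constants (WHEEL_BASE, METRES_PER_TICK) are purely illustrative, not from any specific platform.

```java
// Minimal sketch of local pose tracking via wheel odometry on a differential-drive car.
// All constants are illustrative assumptions; calibrate them for your own hardware.
public class OdometryModel {
    public static class Pose {
        public double x, y, theta; // metres, metres, radians
    }

    private static final double WHEEL_BASE = 0.15;       // distance between wheels, metres
    private static final double METRES_PER_TICK = 0.001; // wheel encoder resolution

    private final Pose pose = new Pose();

    /** Update the estimated pose from left/right encoder tick deltas. */
    public Pose update(long leftTicks, long rightTicks) {
        double dLeft = leftTicks * METRES_PER_TICK;
        double dRight = rightTicks * METRES_PER_TICK;
        double dCenter = (dLeft + dRight) / 2.0;
        double dTheta = (dRight - dLeft) / WHEEL_BASE;

        pose.x += dCenter * Math.cos(pose.theta + dTheta / 2.0);
        pose.y += dCenter * Math.sin(pose.theta + dTheta / 2.0);
        pose.theta += dTheta;
        return pose;
    }

    /** Project where the car will be after dt seconds at the given speed and turn rate. */
    public Pose project(double speed, double turnRate, double dt) {
        Pose p = new Pose();
        p.theta = pose.theta + turnRate * dt;
        p.x = pose.x + speed * dt * Math.cos(pose.theta + turnRate * dt / 2.0);
        p.y = pose.y + speed * dt * Math.sin(pose.theta + turnRate * dt / 2.0);
        return p;
    }
}
```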
You can add a heartbeat to your protocol to monitor the quality of the communication line, and if the heartbeat is delayed or missing, initiate an emergency stop.
Also, don't bother with TCP; use UDP only, and maintain your own sequence counter to monitor missing packets. The same applies to the telemetry stream, not just the command channel.
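As a rough illustration, here is a minimal sketch of a car-side UDP command receiver with a per-packet sequence counter and a heartbeat timeout; the packet layout, port number, and timeout value are assumptions chosen for the example, not any standard.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.ByteBuffer;

// Minimal sketch of a UDP command receiver for the car side.
// Assumed packet layout: 4-byte sequence number + 1-byte command.
// Command 0 is treated as a heartbeat; anything else is a drive command.
public class CommandReceiver {
    private static final int PORT = 5005;                  // arbitrary choice
    private static final int HEARTBEAT_TIMEOUT_MS = 300;   // stop if silent this long

    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(PORT)) {
            socket.setSoTimeout(HEARTBEAT_TIMEOUT_MS);
            byte[] buf = new byte[16];
            int lastSeq = -1;

            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                try {
                    socket.receive(packet);
                } catch (java.net.SocketTimeoutException e) {
                    emergencyStop(); // heartbeat delayed or missing
                    continue;
                }
                if (packet.getLength() < 5) continue; // malformed packet, ignore

                ByteBuffer bb = ByteBuffer.wrap(packet.getData(), 0, packet.getLength());
                int seq = bb.getInt();
                byte command = bb.get();

                if (seq <= lastSeq) continue; // drop stale or duplicate packets
                if (lastSeq >= 0 && seq > lastSeq + 1) {
                    System.out.println("Lost " + (seq - lastSeq - 1) + " packet(s)");
                }
                lastSeq = seq;

                if (command != 0) {
                    applyDriveCommand(command);
                }
            }
        }
    }

    private static void emergencyStop() { /* cut motor power here */ }
    private static void applyDriveCommand(byte command) { /* map to motor outputs */ }
}
```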
I plan to extend the range of my WiFi with my notebook. My question: is it possible to build a wireless repeater with only one NIC, or do I really need at least two NICs, one for connecting to the existing network and receiving the packets, and the other for extending the WiFi signal? What I actually want to do is use my laptop as a WiFi repeater, but only with the built-in NIC, no second one.
I've searched the net already but found nothing about how a WiFi repeater works or whether they have two NICs integrated.
Hope you guys can enlighten me ;)
EDIT (added schemes):
Possibility A
Possibility B
What can be achieved with an AP-capable chip/firmware, for instance the ath9k.
You can't turn a laptop's WiFi into a range extender; I believe it requires special WiFi chip firmware and a special antenna configuration.
However, you might check whether the WiFi chip you have supports AP mode in its firmware (not all manufacturers provide that), and if it does, you can set up an access point with the same SSID. In that case your WiFi clients will roam from one AP to the other. Of course, this kind of setup requires an Ethernet cable attached to your laptop.
I am about to start on a huge new project which will rely on the use of an Arduino connected to third party electronics (in this case an electromyography board I have already built).
I have a good idea of how to transmit data between the Arduino and the iPhone or iPad using protocols like OSC and an Ethernet shield.
What I am hoping to achieve is to effectively analyse an incoming analog signal and recognise the gestures employed to create that signal. So what I am essentially talking about is waveform analysis, whether it is on the iPhone side or on the Arduino side. Are there any libraries or previous methods of analysing or recognising gestures? This is going to be a large research project, so I am really looking for a push in the right direction.
I understand this might be a vague question, so if anybody wants me to shed more light on the matter, I will be more than happy to.
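As a possible starting point for the waveform-analysis side of the question above, here is a minimal sketch of the kind of sliding-window feature extraction (RMS and zero crossings) commonly used as a front end before gesture classification; the window size and threshold are arbitrary assumptions, and a real system would feed such features into a proper classifier.

```java
// Minimal sketch of sliding-window feature extraction for an EMG-like signal.
// Window length and threshold are arbitrary assumptions chosen for illustration.
public class EmgFeatures {
    private static final int WINDOW = 64; // samples per analysis window

    /** Root-mean-square amplitude of one window: a rough measure of muscle activation. */
    public static double rms(double[] window) {
        double sum = 0;
        for (double s : window) sum += s * s;
        return Math.sqrt(sum / window.length);
    }

    /** Zero crossings per window: a rough measure of frequency content. */
    public static int zeroCrossings(double[] window) {
        int count = 0;
        for (int i = 1; i < window.length; i++) {
            if ((window[i - 1] < 0) != (window[i] < 0)) count++;
        }
        return count;
    }

    /** Very naive "gesture" detector: activation whenever RMS exceeds a threshold. */
    public static boolean isActive(double[] samples, int offset, double threshold) {
        double[] window = java.util.Arrays.copyOfRange(samples, offset, offset + WINDOW);
        return rms(window) > threshold;
    }
}
```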
Is there a way to determine an iPhone's exact location (indoors, and to within just a couple of feet) via radio/antennas or some other infrastructure located around the premises (e.g. a hospital, shopping mall, school)? I will appreciate any ideas/direction (technologies, research) on how to overcome this limitation.
If you mean for an area you have control over (setting up a location network for a specific school/hospital), as opposed to generic location, you'd be able to triangulate your position based on WiFi signal power from APs with known locations.
If you wanted it to be a generic solution, and you know there would be multiple APs in/around the buildings you care about, you could triangulate the positions of all WiFi signals while you still have GPS outside the building, and then reference those locations when you lose GPS accuracy. The first part is something that many wardriving applications already do.
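A simpler variant of that idea is fingerprinting: instead of triangulating the APs themselves, record the RSSI readings together with a GPS fix while GPS is still good, then match later scans against the recorded fingerprints. A minimal sketch, with purely illustrative data structures and no real WiFi-scanning API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Minimal sketch of WiFi fingerprinting: store (RSSI-per-BSSID, GPS fix) pairs while GPS is
// available, then locate later by nearest-neighbour match on the RSSI vectors.
// All names here are illustrative, not an existing API.
public class WifiFingerprinter {
    public record Fix(double lat, double lon) {}
    public record Fingerprint(Map<String, Integer> rssiByBssid, Fix fix) {}

    private final List<Fingerprint> database = new ArrayList<>();

    /** Call while GPS is still accurate: remember what the WiFi environment looked like here. */
    public void record(Map<String, Integer> scan, Fix gpsFix) {
        database.add(new Fingerprint(scan, gpsFix));
    }

    /** Call when GPS is gone: return the stored fix whose RSSI vector best matches this scan. */
    public Fix locate(Map<String, Integer> scan) {
        Fix best = null;
        double bestDistance = Double.MAX_VALUE;
        for (Fingerprint fp : database) {
            double d = 0;
            int shared = 0;
            for (Map.Entry<String, Integer> e : scan.entrySet()) {
                Integer stored = fp.rssiByBssid().get(e.getKey());
                if (stored == null) continue;
                double diff = e.getValue() - stored;
                d += diff * diff;
                shared++;
            }
            if (shared == 0) continue;      // no APs in common, can't compare
            d = Math.sqrt(d / shared);      // RMS difference over shared APs
            if (d < bestDistance) {
                bestDistance = d;
                best = fp.fix();
            }
        }
        return best; // null if nothing comparable was recorded
    }
}
```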
Here's an article describing a commercial technology for this purpose in high-level detail: link
And here's a link to a SO page where people have started discussing possible methodologies: link
Use GPS and hope that you have good coverage.
Other than that, you can deploy several WiFi hotspots that measure the signal strength of each packet, and triangulate the iPhone's position relative to three or more of these hotspots based on the signal strength each of them measured.
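To sketch what that triangulation could look like: convert each measured signal strength into an approximate distance with a log-distance path-loss model, then intersect the resulting circles. The constants below (reference power at 1 m, path-loss exponent) are assumptions you would have to calibrate on site, and real RSSI readings are noisy, so treat this only as the core of the idea.

```java
// Minimal sketch of signal-strength positioning with three hotspots at known positions.
// Path-loss constants are assumptions; in practice you would calibrate them on site.
public class RssiTrilateration {
    private static final double RSSI_AT_1M = -40.0;        // assumed reference power at 1 m, dBm
    private static final double PATH_LOSS_EXPONENT = 2.5;  // assumed indoor environment

    /** Log-distance path-loss model: convert a measured RSSI (dBm) to metres. */
    public static double rssiToDistance(double rssi) {
        return Math.pow(10.0, (RSSI_AT_1M - rssi) / (10.0 * PATH_LOSS_EXPONENT));
    }

    /**
     * Intersect the three distance circles (x[i], y[i], d[i]) by linearising:
     * subtracting circle 0 from circles 1 and 2 gives two linear equations in (x, y).
     */
    public static double[] trilaterate(double[] x, double[] y, double[] d) {
        double a1 = 2 * (x[1] - x[0]), b1 = 2 * (y[1] - y[0]);
        double c1 = d[0] * d[0] - d[1] * d[1] + x[1] * x[1] - x[0] * x[0] + y[1] * y[1] - y[0] * y[0];
        double a2 = 2 * (x[2] - x[0]), b2 = 2 * (y[2] - y[0]);
        double c2 = d[0] * d[0] - d[2] * d[2] + x[2] * x[2] - x[0] * x[0] + y[2] * y[2] - y[0] * y[0];

        double det = a1 * b2 - a2 * b1;  // zero if the hotspots are collinear
        return new double[] { (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det };
    }

    public static void main(String[] args) {
        double[] xs = { 0, 10, 0 }, ys = { 0, 0, 10 };  // hotspot positions, metres
        double[] ds = { rssiToDistance(-55), rssiToDistance(-60), rssiToDistance(-65) };
        double[] pos = trilaterate(xs, ys, ds);
        System.out.printf("Estimated position: (%.1f, %.1f)%n", pos[0], pos[1]);
    }
}
```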
A quick search for "signal triangulation" on the internet reveals a Wi-Fi Based Real-Time Location Tracking technology from Cisco. I have not used it, so I can't vouch for it; and I suspect it's rather expensive. There might be other solutions as well.
The alternative would be to buy several wifi routers or access points and flash them with your own version of the firmware. You can probably use OpenWRT or DD-WRT as a base for this.
I was wondering how much time is required to convey information about the tilt and position (not GPS) of one particular iPhone to another. Could 2 iPhones send and receive this information simultaneously? What about 3 iPhones? I'm interested in an application that can simultaneously send, receive, and make conditional decisions based on the information received, all within half a second or so.
Any chance this is possible? If so, is Bluetooth or WiFi better?
Thanks a ton,
Jake
This is currently not possible without an intermediate server (short of a jailbreak, which would make it possible, but extremely difficult).
I'm assuming your purpose is gaming, in which case the latency of a round trip to a server and back over a cellular data network is likely to be too long for a satisfactory gaming experience. I don't believe it would be within half a second.
This will be possible via Bluetooth in the upcoming 3.0 iPhone software, but that is still under NDA, so you are not likely to get any reliable performance numbers until it is released. If I had to guess, I would say the latency of a direct Bluetooth connection would be FAR under half a second.
All you've got as an option right now is Wi-Fi or the Cell Network. If you use Bonjour over Wi-Fi, you'd have latencies in the milliseconds, but all the phones would have to be connected to the same access point. Take a look at the WiTap example.
It is definitely possible. You'd want to connect your peers over WiFi for best performance and reliability, but Bluetooth would be OK as long as your data packets were constrained to small sizes (< 1k). Check out this documentation and sample code to see how to access UIAccelerometer:
http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIAccelerometer_Class/Reference/UIAccelerometer.html
http://developer.apple.com/iphone/library/samplecode/AccelerometerGraph/index.html#//apple_ref/doc/uid/DTS40007410
The trick is that the update frequency is controlled in part by the system's needs, so there may be a window (while the system is attempting to update device orientation) in which your application receives no updates.