It's based on the project I did for CSC 4320 Operating Systems this spring. (Never mind that it's not really operating-system related. Apparently that wasn't the point.) I couldn't cram anywhere near what I know from the project into the 500-word limit for the contest, so at some point I hope to add to this post links to the proposal, presentation, and paper generated for the class. If you want to understand the rest of this post, it would help to go read my entry first.
I would specifically use a GPU (rather than some other processor) because the overwhelming majority of the necessary computation would consist of manipulating a 3D state matrix, and matrix operations are basically what GPUs do. I specify "mobile" because the data scale suggests that the computing power would be sufficient, and to minimize both cost and power use. Unfortunately, the encoding of the state - particularly, of the response profile - is not clear (I found no indication that the necessary mathematical techniques have been developed), so there's some possibility that the estimated data scale is way off.
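To make the "3D state matrix" idea concrete, here is a minimal sketch of how per-element responses might be computed as bulk array operations. Everything here is an assumption for illustration: the axis layout (rows × columns of actuator elements, with per-element channels for measured force, current height, and target height), the channel names, and the simple stiffness rule are mine, not part of any existing encoding.

```python
import numpy as np

# Hypothetical state layout: rows x cols of actuator elements, with
# three channels per element. Names and the response rule below are
# illustrative assumptions, not an existing standard.
ROWS, COLS = 64, 64
FORCE, HEIGHT, TARGET = 0, 1, 2

state = np.zeros((ROWS, COLS, 3))
state[..., TARGET] = 1.0          # e.g. every element raised to 1 mm

def step(state, stiffness=0.5, dt=0.01):
    """One update: move each element toward its target height,
    but yield downward where measured force exceeds the stiffness."""
    error = state[..., TARGET] - state[..., HEIGHT]
    yielding = state[..., FORCE] > stiffness
    drive = np.where(yielding, -state[..., FORCE], error)
    state[..., HEIGHT] += drive * dt
    return state

state[10, 10, FORCE] = 2.0        # a fingertip pressing one element
state = step(state)
```

The point of the sketch is that every operation is elementwise over the whole array at once, which is exactly the kind of work a GPU (even a mobile one) handles efficiently.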
In addition to missing a response profile encoding, I also didn't find any indication of a standard for describing the intended immediate action of an actuator. In particular, the response calculated based on the state and response profile must be translated into analog control signals for the particular actuator mechanism; the missing standard would describe how the immediate action is encoded from the decision-making component to the component that generates the analog control signals. (This standard could be applied to nearly any force-feedback device; it's a little surprising that it doesn't seem to exist.)
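To show what such a standard might look like, here is a hypothetical sketch of a compact "immediate action" message passed from the decision-making component to the component that generates the analog control signals. The field names, units, and wire format are all my assumptions; no such standard exists, which is the point of the paragraph above.

```python
import struct

# Hypothetical message format for one actuator element's next action:
# element row/col (uint16 each), target displacement in micrometers
# (int32), hold stiffness (float32), duration in ms (uint16).
# Little-endian, no padding. All of this is illustrative, not a spec.
ACTION_FMT = "<HHifH"

def encode_action(row, col, target_um, stiffness, duration_ms):
    return struct.pack(ACTION_FMT, row, col, target_um, stiffness, duration_ms)

def decode_action(payload):
    return struct.unpack(ACTION_FMT, payload)

msg = encode_action(10, 10, -500, 0.5, 20)   # press element down 0.5 mm
```

A real standard would also need device-capability negotiation (range, force limits, update rate), but even a fixed format like this would decouple decision-making from the particulars of the actuator mechanism.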
The best candidate for sensor-actuator mechanism seems to be amplified piezoelectric, where the same physical mechanism serves as both sensor and actuator. (All other mechanisms I found actually were two separate mechanisms; piezoelectrics shift some complexity away from the physical mechanism and into the signal handling.) Unamplified piezoelectrics are much more durable, and probably much cheaper, but we haven't yet made one with anywhere near millimeter range - piezoelectric actuators are mostly used in sub-nanometer ranges, for example in adaptive optics and inkjet flow control. (Piezoelectric force-sensors are used in electronic scales, including the Wii Fit board, and are the basis of the common small accelerometers used in many gadgets - they're essentially a small weight surrounded by sensors.)
The 2mm square footprint is slightly smaller than the smallest amplified piezoelectric actuator I found, which has an actuation range of only 1mm. It appears that sensor-actuators are typically packaged individually, so cost-effective production of arrays may not be possible today. It's not clear that a device with the behavior I imagine is possible at all with current technology, much less affordably. There is a ton of active development on the nano and micro scales, and quite a bit on macro scales, but there doesn't seem to be much going on in the milli scale.
I haven't mentioned this anywhere else, because it's not clear whether it would be useful, but an additional capacitive-multitouch sensing layer could be used. Piezoelectric sensors are generally high-force tiny-displacement, so I expect they would be unable to detect the tiny force from just touching the surface. The higher the detection threshold force is, the more useful a separate contact-detection mechanism would be, and it could be useful even with a very low threshold. But it would add complexity to the physical device, and perhaps more so to the response profile encoding.
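The fusion of the two layers is simple to sketch. In this hypothetical example (threshold value and names are my assumptions), the piezo layer reports force but misses contact lighter than its detection threshold, while the capacitive layer reports contact with no force information:

```python
# Hypothetical fusion of the two sensing layers. The threshold value
# and state names are illustrative assumptions.
PIEZO_THRESHOLD = 0.2   # minimum force (arbitrary units) the piezo resolves

def contact_state(piezo_force, capacitive_touch):
    """Classify one element's contact using both layers."""
    if piezo_force >= PIEZO_THRESHOLD:
        return ("pressing", piezo_force)
    if capacitive_touch:
        # Finger resting on the surface, too light for the piezo to see.
        return ("touching", 0.0)
    return ("idle", 0.0)
```

The "touching" state is what the piezo layer alone can't give you; whether the response profile encoding should distinguish it from "idle" is exactly the added complexity mentioned above.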
There is also a ton of research into the sensitivity of human fingertips to certain kinds of interactions. The variety is amazing, but somehow I couldn't find anything that seemed to directly address what actuator dimensions would be necessary for this interface to be effective. The largest subset seemed to be investigating the limits of our ability to distinguish small fixed texture patterns (not dynamic, nor large enough to permit edge detection), followed by perceptions of vibrations. Very little about the sort of informative tactile response you expect from an ordinary keyboard. So I wasn't able to get any clear idea of what actuator dimensions would be necessary, and I don't see any way to figure that out without making a few very expensive prototypes.
So, in summary:
- A general force-feedback response profile description needs to be defined.
- Embedded processors capable of computing responses are already affordable, assuming the response profile description is sufficiently concise.
- A general signaling protocol between high- and low-level control mechanisms needs to be defined.
- Sensor-actuator elements would need to be improved significantly; some research would be necessary to determine the necessary dimensions.
- I didn't mention it here, but display layer technology would need to be improved a small amount.
If you haven't already, now would be a good time to go vote for my entry. Thanks!
Now there's a TEDxCERN talk about a fairly similar idea: Shape-shifting tech will change work as we know it
I've posted about it on my Tumblr.
My project proposal, paper, and presentation are now online.