Thursday, January 1, 2015

Reading lines of input in Java

I looked this up for a question which was posted on Facebook, and thought it was worth sharing more widely.

Why is there no System.in.readLine()?

The difficulty of getting a line of input in Java stems from a weird asymmetry between Java's input and output:  System.out is an OutputStream wrapped in a feature-adding PrintStream, but System.in is a raw InputStream with no added features.  I suspect two reasons for this: caution against unwanted character decoding on input, and caution against dividing up inadequately buffered input.  The result is that to divide input from System.in into lines, you have to wrap it yourself in some feature-adding object, the way PrintStream wraps System.out.

If the only division you need is lines, Scanner provides far more than you need.  The least-overkill option I've found is BufferedReader, wrapped around an InputStreamReader (which handles the character decoding), adding buffering along with .readLine().  It's possible that the buffering will cause timing issues that could be avoided with Scanner, but if timing is really critical you probably shouldn't be using Java.  Unfortunately, all of Java's InputStream and Reader classes block, and you can't kill or interrupt a thread that is blocked on input, so if you want either non-blocking input or multiple threads you must write your own input handling; even then you have to wrap System.in in an InputStreamReader to get the ready() method, which lets you check whether a character can be read without blocking.
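
For example, here's a minimal sketch of the BufferedReader approach described above (the class name and the echoed output are just for illustration):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class ReadLines {
        public static void main(String[] args) throws IOException {
            // InputStreamReader decodes bytes to characters;
            // BufferedReader adds buffering and readLine().
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            String line;
            while ((line = in.readLine()) != null) {  // readLine() returns null at end of input
                System.out.println("Read: " + line);
            }
        }
    }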

What about System.console().readLine()?

The documentation for Console is not very clear, but it seems to be intended for interaction with something like a POSIX Terminal Interface, which is often called a "console" for historical reasons.  Originally, any terminal interface had direct control of the keyboard and screen, which would be in text mode; the same interface in a window is usually called a "virtual terminal" or "virtual console," terms which also have multiple meanings for historical reasons.  By default, when a program is run in a terminal interface, the standard input (System.in), standard output (System.out), and standard error (System.err) streams are connected to the terminal, so input comes from the keyboard and output goes to the screen (or window).  All command-line interpreters allow all of these streams to be redirected, so that the output of one command can be the input of another command without using the terminal interface in between.  It seems that Console is intended to allow access to a console interface only in cases where the standard I/O streams are not redirected.  (Even then, it does not go as far as to provide any of the more modern features, such as detecting the window width.)  If you need to handle redirected I/O, or need full access to the terminal interface, System.console() is not suitable.
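
For illustration, here's a minimal sketch of the null check this implies: on a typical JVM, System.console() returns null when the standard streams are redirected, so a program that must handle both cases has to fall back to System.in itself.  (The prompt text and class name are just placeholders.)

    import java.io.BufferedReader;
    import java.io.Console;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class ConsoleOrRedirected {
        public static void main(String[] args) throws IOException {
            Console console = System.console();
            String line;
            if (console != null) {
                // Attached to a terminal: Console can prompt and read directly.
                line = console.readLine("Enter something: ");
            } else {
                // Standard I/O is redirected, so System.console() returned null;
                // fall back to wrapping System.in ourselves.
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                line = in.readLine();
            }
            System.out.println("Got: " + (line == null ? "(end of input)" : line));
        }
    }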

What about the Java Console?

The Java Console has nothing to do with either Console or a POSIX Terminal Interface; it is a way to see error and debug messages from applets or Java Web Start applications.  It does not provide any mechanism for input, and should only be used for development.  "Java Console is a simple debugging aid that redirects any System.out and System.err to the console window.  It is available for applets running with Java Plug-in and applications running with Java Web Start."  (More about those contexts at Applet or Java Web Start Application? in the Java Rich Internet Applications Guide.)

Tuesday, August 13, 2013

Published!

My undergraduate honors thesis, A Universal Framework for (nearly) Arbitrary Dynamic Languages, has been published online in the GSU Digital Archive.

Saturday, November 24, 2012

Overcoming Address Space Limitations

The question "overcoming the mathematical hardware RAM limit of 64bit operating systems?" was closed while I was composing my answer.  I still think my answer is worth sharing, so I'm posting it here.  If anyone is able to share this answer with the user who asked, please do.



(The question:)

I read that the limit RAM memory for a 32-bit OS is 4GB, whereas for a 64-bit machine it seems to be 16TB (or is it 16EB?). Are OSes like Linux planning ways to overcome this 64-bit limitation in similar ways to Physical Address Extension (PAE) for 32-bit processors? My googling was unsuccessful.




(My answer:)

The short answer is no, because it won't be useful for the foreseeable future, and by the time it is, everything will have changed anyway.  The long answer is, well, long...



Tuesday, June 7, 2011

I Entered a Contest

Go vote for my entry!

It's based on the project I did for CSC 4320 Operating Systems this spring.  (Never mind that it's not really operating-system related.  Apparently that wasn't the point.)  I couldn't cram anywhere near what I know from the project into the 500-word limit for the contest, so at some point I hope to add to this post links to the proposal, presentation, and paper generated for the class.  If you want to understand the rest of this post, it would help to go read my entry first.



I would specifically use a GPU (rather than some other processor) because the overwhelming majority of the necessary computation would consist of manipulating a 3D state matrix, and matrix operations are basically what GPUs do.  I specify "mobile" because the data scale suggests that the computing power would be sufficient, and to minimize both cost and power use.  Unfortunately, the encoding of the state - particularly, of the response profile - is not clear (I found no indication that the necessary mathematical techniques have been developed), so there's some possibility that the estimated data scale is way off.

In addition to the missing response profile encoding, I also didn't find any indication of a standard for describing the intended immediate action of an actuator.  In particular, the response calculated from the state and response profile must be translated into analog control signals for the particular actuator mechanism; the missing standard would describe how the intended action is passed from the decision-making component to the component that generates those analog control signals.  (This standard could be applied to nearly any force-feedback device; it's a little surprising that it doesn't seem to exist.)
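
To make that gap concrete, here is a purely hypothetical sketch of what such an encoding might look like as a Java interface.  Every name, field, and unit here is invented for illustration; as noted above, no actual standard of this kind seems to exist.

    /**
     * Hypothetical hand-off between the decision-making component and the
     * component that generates analog control signals for one actuator.
     * All names, fields, and units are invented for illustration only.
     */
    public interface ActuatorCommandSink {

        /** One intended immediate action for one actuator element. */
        final class ActuatorCommand {
            public final int elementId;          // which element in the array
            public final double displacementMm;  // target displacement, in millimeters
            public final double maxForceNewtons; // force ceiling for this action
            public final long durationMicros;    // time allowed to reach the target

            public ActuatorCommand(int elementId, double displacementMm,
                                   double maxForceNewtons, long durationMicros) {
                this.elementId = elementId;
                this.displacementMm = displacementMm;
                this.maxForceNewtons = maxForceNewtons;
                this.durationMicros = durationMicros;
            }
        }

        /** The low-level component translates this into analog control signals. */
        void submit(ActuatorCommand command);
    }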

The best candidate for a sensor-actuator mechanism seems to be an amplified piezoelectric element, where the same physical mechanism serves as both sensor and actuator.  (All other mechanisms I found were actually two separate mechanisms; piezoelectrics shift some complexity away from the physical mechanism and into the signal handling.)  Unamplified piezoelectrics are much more durable, and probably much cheaper, but we haven't yet made one with anywhere near millimeter range - piezoelectric actuators are mostly used in sub-nanometer ranges, for example in adaptive optics and inkjet flow control.  (Piezoelectric force sensors are used in electronic scales, including the Wii Fit board, and are the basis of the common small accelerometers used in many gadgets - they're essentially a small weight surrounded by sensors.)

The 2mm square footprint is slightly smaller than the smallest amplified piezoelectric actuator I found, which has an actuation range of only 1mm.  It appears that sensor-actuators are typically packaged individually, so cost-effective production of arrays may not be possible today.  It's not clear that a device with the behavior I imagine is possible at all with current technology, much less affordably.  There is a ton of active development at the nano and micro scales, and quite a bit at the macro scale, but there doesn't seem to be much going on at the milli scale.

I haven't mentioned this anywhere else, because it's not clear if it would be useful, but an additional capacitive-multitouch sensing layer could be used.  Piezoelectric sensors are generally high-force, tiny-displacement devices, so I expect they would be unable to detect the tiny force from just touching the surface.  The higher the detection threshold force is, the more useful a separate contact-detection mechanism would be, and it could be useful even with a very low threshold.  But it would add complexity to the physical device, and perhaps more so to the response profile encoding.

There is also a ton of research into the sensitivity of human fingertips to certain kinds of interactions.  The variety is amazing, but somehow I couldn't find anything that seemed to directly address what actuator dimensions would be necessary for this interface to be effective.  The largest subset seemed to be investigating the limits of our ability to distinguish small fixed texture patterns (not dynamic, nor large enough to permit edge detection), followed by perceptions of vibrations.  Very little addressed the sort of informative tactile response you expect from an ordinary keyboard.  So I wasn't able to get any clear idea of what actuator dimensions would be necessary, and I don't see any way to figure that out without making a few very expensive prototypes.

So, in summary:
  • A general force-feedback response profile description needs to be defined.
  • Embedded processors capable of computing responses are already affordable, assuming the response profile description is sufficiently concise.
  • A general signaling protocol between high- and low-level control mechanisms needs to be defined.
  • Sensor-actuator elements would need to be improved significantly; some research would be necessary to determine the necessary dimensions.
  • I didn't mention it here, but display layer technology would need to be improved a small amount.

If you haven't already, now would be a good time to go vote for my entry.  Thanks!