A little off-topic, maybe, but this New Scientist piece about human echolocation is pretty fascinating. The author, Daniel Kish, lost his sight as an infant but taught himself to "see" the world around him by making bat-like clicking sounds and listening to the echoes:
Clicking my tongue quickly and scanning with my head, I move cautiously forward, catching fleeting images of bodies darting hither and thither. I follow spaces that are clear, avoiding clusters of bodies, keeping my distance from bouncing balls. I am not afraid—to me, this is a puzzle. I turn my head and click over my shoulder. I can still hear the wall of the building. As long as I can hear that, I can find my way back. ...
Echoes can be used to perceive three characteristics of objects: where they are, their general size and shape and, to some extent, what they are like - solid versus sparse, sound-reflective versus sound-absorbent. This allows the brain to create an image of the environment. ...
Passive sonar that relies on incidental noises such as footsteps produces relatively vague images. Active sonar, in which a noise such as a tongue click is produced specifically to generate echoes, is much more precise. My colleagues and I use the term FlashSonar for active sonar, because for us each click is similar to the brief glimpse of the surroundings sighted people get when a camera flash goes off in the dark.
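The physics behind each "flash" is simple enough to jot down: the delay between a click and its echo tells you how far away the reflecting surface is, since the sound has to travel out and back. A minimal sketch (my own illustration, not from the article), assuming sound travels at roughly 343 m/s at room temperature:

```python
SPEED_OF_SOUND = 343.0  # meters per second, approximate at 20 °C

def distance_from_echo(delay_seconds: float) -> float:
    """Distance to the reflecting surface, given the click-to-echo delay.

    The echo covers the round trip, so the one-way distance is half
    the total path length.
    """
    return SPEED_OF_SOUND * delay_seconds / 2

# Example: a wall about 3 m away returns an echo after roughly 17.5 ms
print(f"{distance_from_echo(0.0175):.2f} m")  # -> 3.00 m
```

Delays that short are well below what we consciously notice, which makes it all the more impressive that the brain can turn them into a spatial image.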
There's also a short "echolocation for beginners" guide to trying this yourself, though I couldn't get it to work (e.g., "Try going to a corner of a room, and hear how your voice sounds hollow when you're facing the corner. How does the sound change when you face away from it?"). Still, neat trick.
--Bradford Plumer