
Monday 18 May 2015

Can we convert video images into sound?

Most of us know that sound comes from vibrations. These vibrations create sound waves which travel through mediums such as air and water before reaching our ears. But can we recreate sound from video images? Abe Davis goes a step further and demonstrates a method for recovering these hidden vibrations. A piece of software called the "motion microscope" finds the subtle motions in a video and amplifies them until they become large enough for us to see. This lets video act as a touch-free sensor for monitoring breathing and pulse. The same idea can be applied to what we hear: the camera captures the tiny motions caused by sound vibrations, turning everything we can see into a microphone. Watch the video below:
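For a rough feel for the idea, here is a toy Python sketch that pulls a crude 1-D "vibration" signal out of a video and saves it as audio. It is not the authors' algorithm (which uses phase-based motion analysis and typically a high-speed camera), and the file names are just placeholders.

```python
# Toy sketch only: measure average frame-to-frame brightness change in a video
# and treat that as an audio signal. Hypothetical file paths.
import cv2
import numpy as np
from scipy.io import wavfile

VIDEO_PATH = "input.mp4"      # hypothetical input video
OUTPUT_WAV = "recovered.wav"  # hypothetical output audio

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS)

signal = []
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is not None:
        # Mean brightness change between consecutive frames as a crude
        # proxy for global motion in the scene.
        signal.append(np.mean(gray - prev))
    prev = gray
cap.release()

signal = np.array(signal)
signal -= signal.mean()
if np.max(np.abs(signal)) > 0:
    signal /= np.max(np.abs(signal))

# The sample rate of this "audio" equals the camera frame rate, so an
# ordinary 30 fps video can only capture vibrations below about 15 Hz;
# recovering speech needs far higher frame rates.
wavfile.write(OUTPUT_WAV, int(fps), (signal * 32767).astype(np.int16))
```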



Reference:
Abe Davis, Michael Rubinstein, Neal Wadhwa, Gautham Mysore, Fredo Durand, and William T. Freeman, "The Visual Microphone: Passive Recovery of Sound from Video", ACM Transactions on Graphics (Proc. SIGGRAPH), 2014, Volume 33, Number 4, Pages 79:1-79:10.

Tuesday 5 May 2015

The Internet of Things' New Direction




The IoT is advancing and beginning to point toward a very different world. System architects think it could change the Things themselves, data centers, and even the Internet. Sensor selection is one thing to watch, since a single camera may be able to collect more data, more reliably, than a swarm of simple sensors. If a camera is chosen, should we move the raw data up to the cloud for analysis, or design in substantial computing power close to the sensors? If your algorithm is highly intolerant of latency, you have no choice but to rely on local computing, using hardware accelerators such as GPU, FPGA, or Xeon Phi chips. But if you can tolerate some latency between sensor input and system response, the question becomes how much, and with how much variation. Here, the limitations of the Internet become an issue. A rough sketch of this trade-off is shown below.
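As a minimal illustration of that local-versus-cloud decision, the Python sketch below compares a latency budget against network and compute costs. All the parameter names and numbers are hypothetical; a real system would also weigh bandwidth, power, and cost.

```python
# Minimal sketch (hypothetical parameters): decide whether a camera-based IoT
# node should analyze frames locally or send them to the cloud, given the
# latency the application can tolerate.

def choose_processing_site(latency_budget_ms: float,
                           network_rtt_ms: float,
                           cloud_compute_ms: float,
                           local_compute_ms: float) -> str:
    """Return 'local' or 'cloud' for a single inference request."""
    cloud_total = network_rtt_ms + cloud_compute_ms
    # If the cloud path cannot meet the deadline, local hardware
    # (GPU / FPGA / Xeon Phi accelerators) is the only option.
    if cloud_total > latency_budget_ms:
        return "local"
    # Otherwise pick whichever path responds faster.
    return "local" if local_compute_ms < cloud_total else "cloud"

# Example: a 100 ms budget with an 80 ms network round trip forces local compute.
print(choose_processing_site(latency_budget_ms=100,
                             network_rtt_ms=80,
                             cloud_compute_ms=40,
                             local_compute_ms=60))  # -> 'local'
```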