Posted by Jimmy 28 May, 2015 17:58:56
After deciding on the subject of our project and planning all the tasks we have to do, we presented everything to the other groups. Our presentation is attached to this post.
As a reminder, our project is a music keyboard controlled by gestures and voice. On the screen, the user sees a keyboard. He can interact with it by making gestures (pushing keys), and these gestures will generate a sound. He can also use his voice to change the instrument and the rhythm. Our output modalities are sound and visuals, so that the user gets feedback for his actions.
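To make the interaction idea concrete, here is a minimal sketch (in Python, not our actual Processing code) of how a tracked hand position could be mapped to one of the on-screen keys. The key count and screen width are assumptions for illustration:

```python
# Hypothetical sketch: map a tracked hand's horizontal position
# to the index of the on-screen key below it.
KEY_COUNT = 8        # assumed number of keys drawn on screen
SCREEN_WIDTH = 640   # assumed image width in pixels

def key_under_hand(hand_x: float) -> int:
    """Return the index (0..KEY_COUNT-1) of the key under the hand."""
    # Clamp the position to the screen, then split it into equal key zones.
    hand_x = min(max(hand_x, 0), SCREEN_WIDTH - 1)
    return int(hand_x * KEY_COUNT / SCREEN_WIDTH)
```

In the real system this index would select which note to play when a push gesture is detected over that key.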
In addition to this presentation, we started exploring the technical side of the system, beginning development with Processing and the Kinect sensor. We are now able to track a specific body part of a user, such as the hand, shoulder or head, and we can detect some gestures, like the push gesture that interests us for our project. Moreover, we found some useful sounds.
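As an illustration of the push-gesture idea (a sketch of the general technique, not the Kinect library's built-in detector), a push can be approximated as a rapid decrease of the hand's depth, i.e. its distance to the sensor, between consecutive frames. The threshold value is an assumption:

```python
# Hypothetical sketch: detect a "push" as a rapid decrease in the
# hand's depth (distance to the sensor) between consecutive frames.
PUSH_THRESHOLD_MM = 80  # assumed minimum forward movement per frame

def detect_pushes(depth_samples):
    """Return the frame indices at which a push gesture is detected."""
    pushes = []
    for i in range(1, len(depth_samples)):
        # The depth value shrinks when the hand moves towards the sensor.
        if depth_samples[i - 1] - depth_samples[i] > PUSH_THRESHOLD_MM:
            pushes.append(i)
    return pushes
```

For example, a hand hovering around 2000 mm that suddenly jumps 100 mm forward in one frame would trigger a push at that frame.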
Next week, we will continue developing and learning to use the Kinect's features. We will investigate how to implement speech recognition, which is an interesting part of our project. Finally, we will look for additional sounds in order to offer users several possibilities.
Posted by Maria 22 May, 2015 00:23:56
This week, we decided what kind of project we will develop over the coming weeks. For the brainstorming we used a shared notepad, where we collected all kinds of different ideas.
On Wednesday we tried out the Kinect and voted for our best idea as a project: a virtual music keyboard.
We then did a lot of conceptual sketching to find out what features and special functions this keyboard should provide, how the interaction with it should work and how the interface should look. After collecting a lot of ideas we sorted them into a category list and built three work packages: basic, advanced and fancy.
Depending on the available time, we will first build everything in the basic package and then keep working on further features.
We also prioritized every feature of the first package, so we know where to start. When the first package is done, we will prioritize the next one.
Posted by Maria 02 May, 2015 17:14:40
Explanation of the „Rotating Snakes“-Illusion
This illusion consists of the perception that the actually static circular areas around the black spots are rotating. This only holds for areas one does not directly focus on, and, interestingly, the circles also spin in different directions.
The „Rotating Snakes“-Illusion belongs to a group of visual phenomena that can be summarized as „physiological illusions“, i.e. illusions that are to be attributed to effects of excessive stimulation of a specific type (brightness, colour, size, position, tilt or movement). More concretely, the illusion is based on a variant of the „peripheral drift“, which can be defined as „an anomalous motion illusion that can be observed in peripheral vision“.
One prominent explanation building on the human perceptual system is the peripheral-spatiotemporal-integration hypothesis (originally referring to patches with sawtooth gradients of luminance), which was proposed by Faubert & Herbert in 1999.
For the explanation of the hypothesis it is important to point out that people almost without exception perceive a sensation of movement in the direction from darker to lighter colours. The illusion also works in black-and-white versions, so only the existence of luminance gradients (differences in lightness), not the colours as such, is relevant.
It is also important to bear in mind that the illusion works under several circumstances: one can look at different locations around the stimulus (i.e. move one's gaze); one can fixate a point outside the stimulus and blink; or the stimulus can be displayed sequentially at different locations while one's gaze stays at one point. The latter aspect, by the way, shows that the perception of motion here is not triggered by efferent (i.e. originating from the central nervous system) eye-movement signals.
Their hypothesis describes, on the one hand, how the illusion of movement starts in the first place and, on the other hand, how it is maintained. In a nutshell, the authors identified three interacting factors that are important for explaining the phenomenon: first, transients introduced by eye movements or blinks; second, processing latencies in the visual system that vary with luminance; and third, the spatiotemporal integration of luminance signals in the periphery.
In more detail, the idea is as follows. The latency of transmission of visual information varies with luminance: the lighter the stimulus, the faster the transmission. Due to the differing lightness in the image (green is lighter than black, for example), the information arrives at units in the first layer at different times. In the second layer, probably composed of units that respond to moving stimuli, these timing differences are integrated and result in motion signals (compare the perception of movement). Further along in the process, the signals are integrated over even larger receptive fields. Since the convergence of information is especially high in the areas of the retina away from the fovea, motion signals become stronger there. This is why the illusion only works in peripheral vision.
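The latency mechanism described above can be illustrated with a toy simulation (our own sketch, not from Faubert & Herbert's paper; the latency values are made-up assumptions): a lighter patch's signal reaches the integration layer earlier than a darker patch's, and it is this systematic arrival-time difference that the second layer could read as motion.

```python
# Toy simulation of luminance-dependent transmission latency.
# The constants are illustrative assumptions, not measured values.
BASE_LATENCY_MS = 100.0  # assumed latency for a black patch (luminance 0)
SPEEDUP_MS = 40.0        # assumed maximum speed-up for white (luminance 1)

def latency(luminance: float) -> float:
    """Transmission latency in ms for a patch of given luminance in [0, 1]."""
    return BASE_LATENCY_MS - SPEEDUP_MS * luminance

def arrival_order(left_lum: float, right_lum: float) -> str:
    """Which patch's signal reaches the integration layer first."""
    dt = latency(left_lum) - latency(right_lum)
    if dt > 0:
        return "right first"  # right patch is lighter, so it arrives earlier
    if dt < 0:
        return "left first"
    return "simultaneous"     # equal luminance: no timing difference
```

With a dark patch on the left and a light patch on the right, the right signal arrives first; with equal luminance the signals are simultaneous, so no motion signal arises, matching the observation that the gradients, not the colours, drive the illusion.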
For the illusion to be maintained, the information then has to be refreshed by blinks or, as more recent studies indicate, micro-saccades.