[Video: KinectCoreVision to Isadora, from Matt Martin on Vimeo]
The next step was to work out how to make the computer recognise similarities between a recorded video and the live mask, and then merge the two together. Beyond that, the system would alternate between different video categories depending on what connections it found. So if a certain colour dominated, or a tree was on display, all related content would show up on the screen, such as previously recorded green trees. That would mean building video categories for different subjects and visually linking them. Colour connections would be possible, but identifying objects is less plausible. This leaves me deciding how to prioritise my time, and I have decided feedback is the biggest priority. Also, because an adaptation of my work has been accepted for a public space, I have to consider the reality of getting it working the way they want it to.
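To make the "colour connections" idea a bit more concrete, here is a minimal sketch of colour-based category matching, written in Python with OpenCV rather than the KinectCoreVision/Isadora tools the project actually uses. The category names, prototype hues, and function names are invented for illustration only.

```python
import cv2
import numpy as np

# Hypothetical category prototypes: a rough mean hue per subject
# (OpenCV stores hue in the 0-179 range).
CATEGORY_HUES = {
    "green_trees": 60,   # roughly 120 degrees on a 0-360 colour wheel
    "blue_sky": 105,
    "warm_tones": 10,
}

def dominant_hue(frame_bgr, mask=None):
    """Return the most common hue in the frame (optionally only inside a mask)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], mask, [180], [0, 180])
    return int(np.argmax(hist))

def pick_category(frame_bgr, mask=None):
    """Choose the recorded-video category whose prototype hue is closest."""
    hue = dominant_hue(frame_bgr, mask)

    def circ_dist(a, b):
        # Hue wraps around, so measure circular distance on the 0-179 range.
        d = abs(a - b)
        return min(d, 180 - d)

    return min(CATEGORY_HUES, key=lambda name: circ_dist(hue, CATEGORY_HUES[name]))
```

A dominant-hue histogram like this is about the simplest "connection" that could drive category switching; anything like actual object recognition (spotting a tree as a tree) would need far more machinery, which is why it felt less plausible.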
The project is now more about giving the audience a collage of everyone's chosen experience, rather than something subjective to what is on screen. The screen no longer acts as a mind relating what it sees to connections with what it knows. It now acts as a super-experience, mixed from all the performers' interactions: what they see and what they choose to include. Essentially, when a performer interacts with the installation, it will play a video recording over and over while compositing in the new mask of that performer's experience. I don't mind this, except that it feels like a bit of a copout. At least now I can create a stronger, more finalised project, and with feedback I can adapt it accordingly. I also have to remember that the practical element is not the whole of the work.
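As a rough illustration of that looping-plus-mask behaviour, here is a minimal sketch in Python with OpenCV, standing in for the actual KinectCoreVision-to-Isadora pipeline. The `get_live_mask` callback and the compositing approach are assumptions for illustration, not the installation's real setup.

```python
import cv2

def run_collage(video_path, get_live_mask):
    """Loop a recorded video forever, compositing in the live performer mask.

    `get_live_mask` is a hypothetical callback returning a uint8 mask
    (255 where the performer is) at the recording's resolution, e.g. fed
    from a Kinect depth threshold.
    """
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            # End of the recording: rewind so it plays over and over.
            cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
            continue
        mask = get_live_mask(frame.shape[1], frame.shape[0])
        # Keep the recorded footage only where the performer's mask is,
        # so each interaction carves its own "experience" out of the video.
        out = cv2.bitwise_and(frame, frame, mask=mask)
        cv2.imshow("collage", out)
        if cv2.waitKey(33) & 0xFF == 27:  # Esc to stop
            break
    cap.release()
    cv2.destroyAllWindows()
```

In the real piece this masking happens inside Isadora with the Kinect blob data; the sketch just shows the shape of the loop: recorded footage cycling endlessly, with each performer's live silhouette deciding what of it is revealed.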
References
KinectCoreVision. (n.d.). GitHub. Retrieved September 27, 2013, from https://github.com/patriciogonzalezvivo/KinectCoreVision