Monday, October 21, 2013

Completed project

I have finally been able to complete the main necessities of my project. It can now track a person's hands, mask the webcam feed to that data, record a video of the masking and play that video back straight after. This has taken longer than I expected, but it has also taught me some very important lessons in programming. Through making this project I can now say I am proficient with both Isadora and openFrameworks. That learning came mostly through trial and error: while I did try tutorials to learn more about both pieces of software, most of what helped me adapt was the experimentation and work needed to complete my goals.
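Since the behaviour is easier to show than describe, here is a minimal skeleton of that track, mask, record and play-back cycle as it might be structured in openFrameworks. The state machine and every name in it are my own assumptions for illustration, not the project's actual code.

```cpp
#include "ofMain.h"

// Hypothetical skeleton of the cycle described above: track and record
// while hands are present, then play the recording back when they leave.
class ofApp : public ofBaseApp {
public:
    enum State { TRACKING, PLAYBACK };
    State state;
    ofVideoGrabber cam;
    ofVideoPlayer recording;

    void setup(){
        state = TRACKING;
        cam.initGrabber(640, 480);
    }

    void update(){
        if(state == TRACKING){
            cam.update();
            // 1. read the tracked hand positions (tracker not shown)
            // 2. mask the webcam frame to those positions
            // 3. hand the masked frame to the recorder
            if(handsGone()){                  // hands have left the frame:
                finishRecording();            // close the movie file...
                recording.loadMovie("last.mov");
                recording.play();             // ...and play it straight back
                state = PLAYBACK;
            }
        } else {
            recording.update();
            if(recording.getIsMovieDone()){   // then return to live tracking
                state = TRACKING;
            }
        }
    }

    void draw(){
        if(state == PLAYBACK) recording.draw(0, 0);
        else                  cam.draw(0, 0);
    }

    // placeholders for the tracking and recording plumbing
    bool handsGone(){ return false; }
    void finishRecording(){ /* stop and save the movie */ }
};
```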

The next steps are refining the code and fixing the remaining minor glitches. They are:

- The gap between the hands leaving the tracker and the recording being rendered is too short for the program to handle, and will most likely crash the application.
- Resizing the video so the 'canvas' aspect takes up the whole screen.
- Copying the code to the Mac computer.
- Creating two windows, so that the hand tracking and a mini display appear on the computer screen while the main display runs on the monitor.
- Creating a mixer or opacity difference between the masked video and the video playback so the two intertwine better (a sketch of this is just below).
- Any other alterations that make the project cleaner, more intuitive and more effective.
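For the mixer/opacity item, a minimal sketch of how the two layers could be blended in openFrameworks; `playback` and `maskedLive` are assumed member images, and the 50% split is only a starting guess:

```cpp
// Draw the played-back recording fully opaque, then the live masked layer
// on top at half alpha so the two intertwine.
void ofApp::draw(){
    ofEnableAlphaBlending();
    ofSetColor(255, 255, 255, 255);  // playback layer, fully opaque
    playback.draw(0, 0);
    ofSetColor(255, 255, 255, 128);  // live masked layer at ~50%
    maskedLive.draw(0, 0);
    ofSetColor(255);                 // reset the tint for later drawing
}
```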

Oh and also complete my exegesis.

 
Hand tracking to video masking, recording and video playback from Matt Martin on Vimeo.

Monday, October 14, 2013

Programming update

I have been able to make a more efficient recorder of the live stage. It now records at 5 Mb/s (or whatever bitrate I choose) instead of 970 Mb/s. I had to dig through some old code to find it, but I got hold of a version that let me choose the bitrate for the video encoder. I am fairly sure what I found was the original version of osxVideoRecorderExample, which has this feature; I could not find it in the present one. My version is now a mix of the two and runs at a much quicker frame rate. The last big step is to put the two programs together - let's do this. Video of the demo is below.


openFrameworks live mask and video recording loop from Matt Martin on Vimeo.
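For context on those numbers, a quick back-of-envelope check (the frame size and rate are my assumptions, not measurements from the project) shows why an uncompressed stage capture lands in the hundreds of megabits per second, and why being able to hand the encoder a target bitrate matters so much:

```cpp
#include <cstdio>

// Rough raw-bitrate arithmetic: uncompressed 1280x720 RGBA at 30 fps is
// the same order of magnitude as the 970 Mb/s figure mentioned above.
int main(){
    const double width = 1280, height = 720;
    const double bitsPerPixel = 32;   // RGBA
    const double fps = 30;
    const double rawMbps = width * height * bitsPerPixel * fps / 1e6;
    std::printf("raw: ~%.0f Mb/s vs encoded target: 5 Mb/s\n", rawMbps); // ~885
    return 0;
}
```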

Friday, October 11, 2013

Software expulsion

Over the past week I have been able to talk with other programmers and digital artists in Melbourne who also use programming languages to manipulate and mix video. The main point I took from our discussions was that by keeping solely to openFrameworks I will put less load on my computer and end up with a more efficient program. I have known that sticking to one program would be better for processing video, but I had never questioned whether I needed Isadora - which is a video mixing software - at all, or whether basic programming could replace it for the video mixing and recording. The comment I received (from a video jockey) was that openFrameworks is capable of doing just about anything. Two weeks ago I would not even have considered this an option, but I have been able to adapt to openFrameworks quickly and steadily.

After a 20 minute search into replicating the visual side of Isadora in openFrameworks [1, 2, 3], it appeared possible to achieve the same level of work without needing two different programs running at once. Within two days I was able to replicate most of what Isadora could do in openFrameworks. This isn't to say that learning Isadora wasn't invaluable: it gave me a way to program quickly and construct ideas and concepts in a short amount of time. I have, however, found that Isadora struggles to handle HD video.
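The masking itself boils down to copying the webcam frame into an RGBA image and writing the hand mask into the alpha channel, essentially the approach discussed in the masking thread in the references. A minimal sketch, assuming the tracker fills `mask` as a greyscale image (white where the hands are):

```cpp
#include "ofMain.h"

// Mask the live webcam feed with a greyscale image by writing the mask
// into the alpha channel of an RGBA copy of each frame.
class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofImage mask;    // greyscale mask from the hand tracker (assumed)
    ofImage masked;  // webcam frame with the mask applied as alpha

    void setup(){
        cam.initGrabber(640, 480);
        mask.allocate(640, 480, OF_IMAGE_GRAYSCALE);
        masked.allocate(640, 480, OF_IMAGE_COLOR_ALPHA);
    }

    void update(){
        cam.update();
        if(!cam.isFrameNew()) return;
        ofPixels& camPix  = cam.getPixelsRef();
        ofPixels& maskPix = mask.getPixelsRef();
        ofPixels& outPix  = masked.getPixelsRef();
        for(int i = 0; i < 640 * 480; i++){
            outPix[i * 4 + 0] = camPix[i * 3 + 0];   // R
            outPix[i * 4 + 1] = camPix[i * 3 + 1];   // G
            outPix[i * 4 + 2] = camPix[i * 3 + 2];   // B
            outPix[i * 4 + 3] = maskPix[i];          // white = visible
        }
        masked.update();  // upload the new pixels to the texture
    }

    void draw(){
        ofEnableAlphaBlending();
        masked.draw(0, 0);
    }
};
```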


Video masking and recording in openFrameworks from Matt Martin on Vimeo.

The next step is combining the hand tracking code with the live video masking and recording. I am planning to finish the practical element within the next few days, which will give me more time to complete what is left of the theoretical and presentational elements of my studio project.


References

masking an image with another image - openFrameworks forum. (n.d.). Retrieved October 11, 2013, from http://forum.openframeworks.cc/index.php?topic=339.0
ofxFenster addon to handle multiple windows (rewrite) - openFrameworks forum. (n.d.). Retrieved October 11, 2013, from http://forum.openframeworks.cc/index.php/topic,6499.0.html
Processor Resources with OF and Video - openFrameworks forum. (n.d.). Retrieved October 11, 2013, from http://forum.openframeworks.cc/index.php?topic=12771.0

Friday, September 27, 2013

Revision

As I become more involved in the technical work, I have begun to re-evaluate what my project is and what it could or should become. When considering the next planned step I started to realise how impractical and time consuming it would be. I have been able to get openFrameworks working as a hand and finger tracker, thanks to KinectCoreVision. It can then send its tracking data to Isadora, which does the masking and video recording.
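KinectCoreVision publishes its tracking data as TUIO messages over OSC; Isadora listens for them here, but the same stream can be read back in openFrameworks with ofxOsc. A hedged sketch, assuming TUIO's conventional port 3333 and the standard /tuio/2Dcur message layout:

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

// Listen for TUIO cursor messages (each tracked fingertip arrives as a
// "set" message carrying a session id and normalised x/y coordinates).
class ofApp : public ofBaseApp {
public:
    ofxOscReceiver receiver;

    void setup(){
        receiver.setup(3333);  // TUIO's conventional port (an assumption here)
    }

    void update(){
        while(receiver.hasWaitingMessages()){
            ofxOscMessage m;
            receiver.getNextMessage(&m);
            if(m.getAddress() == "/tuio/2Dcur" && m.getNumArgs() > 0
               && m.getArgAsString(0) == "set"){
                float x = m.getArgAsFloat(2);  // 0..1 across the sensor
                float y = m.getArgAsFloat(3);
                ofLogNotice() << "finger at " << x << ", " << y;
            }
        }
    }
};
```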


KinectCoreVision to Isadora from Matt Martin on Vimeo.

The next step was to work out how to make the computer recognise similarities between a recorded video and the live mask and then merge the two together - and not only that, but to switch between different video categories as different connections appear. If a certain colour, or a tree, dominated the display, related content would show up on the screen, like previously recorded green trees. That would mean building video categories for different subjects and linking them visually. Colour connections would be possible, but identifying objects is far less plausible. That leaves me deciding how to prioritise my time, and I have decided feedback is the biggest priority. Also, because an adaptation of my work has been accepted for a public space, I have to consider the reality of getting it working the way they want it to.
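The colour half is the tractable part: a cheap version reduces the live masked frame (and, offline, each archived clip) to a mean colour and picks the nearest match. A sketch under my own assumptions - none of this is from the project's code:

```cpp
#include "ofMain.h"
#include <limits>

// Reduce an RGB frame to its mean colour.
ofColor meanColour(ofPixels& pix){
    long r = 0, g = 0, b = 0;
    const int n = pix.getWidth() * pix.getHeight();
    for(int i = 0; i < n; i++){
        r += pix[i * 3 + 0];
        g += pix[i * 3 + 1];
        b += pix[i * 3 + 2];
    }
    return ofColor(r / n, g / n, b / n);
}

// Pick the archived clip whose precomputed mean colour is nearest.
int closestClip(const ofColor& live, const vector<ofColor>& clipMeans){
    int best = 0;
    float bestDist = std::numeric_limits<float>::max();
    for(int i = 0; i < (int)clipMeans.size(); i++){
        const float dr = live.r - clipMeans[i].r;
        const float dg = live.g - clipMeans[i].g;
        const float db = live.b - clipMeans[i].b;
        const float d  = dr * dr + dg * dg + db * db;  // squared distance is enough
        if(d < bestDist){ bestDist = d; best = i; }
    }
    return best;
}
```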

The project is now more about giving the audience a collage of everyone's chosen experience, rather than being subjective to what is on screen. It is no longer the screen acting as a mind, relating what it sees to connections with what it knows. It now acts as a super experience, a mix of every performer's interactions: what they see and what they choose to include. Essentially, when a performer interacts with the installation, it will play a video recording over and over while adding in the new mask of that performer's experience. I don't mind this, apart from it feeling like a bit of a cop-out. At least now I can create a stronger, more finalised project, and with feedback I can adapt it accordingly. I also have to remember the practical is not the only part of the work.

References

KinectCoreVision. (n.d.). GitHub. Retrieved September 27, 2013, from https://github.com/patriciogonzalezvivo/KinectCoreVision

Saturday, August 31, 2013

Plan for assessment

https://vimeo.com/20904879#at=0
With the mid semester assessment coming up I have begun planning what to show so far. Most of this will be the practical elements I have been testing and exploring. What I really want out of it is feedback, which has been delayed by following installation instructions for out-of-date programs. I will give getting KinectCoreVision working with openFrameworks one more try on Monday; if that fails, I will remake the tracker I have to fit what I want, even if it is not that accurate. I want to make sure I get that feedback by next week, and to settle on a final focus point before the holidays. I am fairly sure I can get the simple concept (webcam input, simple finger detection, recording of video, etc.) ready in the week - a rough sketch of that simple concept follows below. The problem lies in the time it takes to set it all up in one go, and in how much CPU it uses. As I can change the quality of the video Isadora records, I can make sure the recordings don't demand too much.
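If it does come to remaking a simpler tracker, the webcam input and blob detection parts are only a few lines with ofxOpenCv's contour finder. A rough sketch - the threshold and blob sizes are guesses that would need tuning:

```cpp
#include "ofMain.h"
#include "ofxOpenCv.h"

// Simple webcam blob detection: grab, convert to greyscale, threshold,
// then find contours (candidate hands/fingers) in the binary image.
class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxCvColorImage colorImg;
    ofxCvGrayscaleImage greyImg;
    ofxCvContourFinder contours;

    void setup(){
        cam.initGrabber(640, 480);
        colorImg.allocate(640, 480);
        greyImg.allocate(640, 480);
    }

    void update(){
        cam.update();
        if(!cam.isFrameNew()) return;
        colorImg.setFromPixels(cam.getPixels(), 640, 480);
        greyImg = colorImg;     // convert to greyscale
        greyImg.threshold(80);  // guessed threshold; tune for the lighting
        // keep blobs between 500 px and a quarter of the frame, up to 4 of them
        contours.findContours(greyImg, 500, 640 * 480 / 4, 4, false);
    }

    void draw(){
        cam.draw(0, 0);
        contours.draw(0, 0);    // overlay the detected blob outlines
    }
};
```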

References

Patricio Gonzalez Vivo. (n.d.). Retrieved August 31, 2013, from http://patriciogonzalezvivo.com/index.php

Tuesday, August 27, 2013

Practice progression

I have lately been working on technical tasks related to my project. They are not the most vital aspects, but they give some idea of what is possible as I plan my final result and direction, and doing them early leaves room to develop new ideas. The faster I move through this, the more time there is for revision.

http://www.youtube.com/watch?v=NeHX5jzHFM4
One of the first issues I had with last semester's project was how well the interaction worked. It is vital, and it needs to be understandable for everyone. I want to know what my options are and what is possible to recreate, so I have been looking for a stable and accurate way to track a person's hands or fingers. So far the results vary, but with accuracy comes complexity. Most decent tracking systems point towards a framework called OpenNI, with a lot of other software required to make it perform. Unfortunately this has led to long readme files and broken links; eventually I found out that even though the official website states a Mac beta version is available, it is not actually usable because it requires the Kinect SDK (which does not work on Mac). Now I am back to finding other solutions or alternatives. Right now I have found some more videos using Processing with the Kinect which look promising.

The other, less probable task was looking for a way to re-record video from the stages in Isadora. What looked almost impossible (because files have to be loaded into Isadora's media section to be usable in Isadora) turned out to be half solved already: Isadora includes an actor to record from video input devices connected to the laptop, such as a webcam. Unfortunately there was no provision for capturing the stages themselves. The trick around this was to make Isadora think there was a "capture" device connected: using CamTwist, the laptop's display (including the stages) becomes a video input for Isadora to detect, which allows a video file to be created. I might switch to something other than CamTwist, as there are lots of options out there, but for now it will do. So: progress.

In fact here is a video of the Isadora patch in action, using a really bad webcam.

References 

About OpenNI - 3D sensing Technology for depth sensors | OpenNI. (n.d.). Retrieved August 26, 2013, from http://www.openni.org/about/
CamTwist. (n.d.). Retrieved August 26, 2013, from http://camtwiststudio.com/
Download the Kinect SDK & Developer Toolkit | Kinect for Windows. (n.d.). Retrieved August 26, 2013, from http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx
Installing OpenNI, NITE  and SensorKinect for Mac OS X. (n.d.). Retrieved August 26, 2013, from http://developkinect.com/resource/mac-os-x/install-openni-nite-and-sensorkinect-mac-os-x
Kinect Hand Processing - Gestures. (2012). Retrieved from http://www.youtube.com/watch?v=NeHX5jzHFM4&feature=youtube_gdata_player
record stage - TroikaTronix Forum. (n.d.). Retrieved August 26, 2013, from http://troikatronix.com/troikatronixforum/discussion/790/record-stage/p1
Setting up the Kinect on OSX 10.8 (Mountain Lion)- Coding Color. (n.d.). Retrieved August 26, 2013, from http://www.codingcolor.com/featured-articles/set-up-kinect-on-osx-10-8-mountain-lion/

Tuesday, August 06, 2013

Continuation of project

Two weeks into the semester and I'm getting back into the flow of things. So far I have not made any groundbreaking development, which is why I have not posted recently. I am aware, though, that reflecting on what is not progressing can lead to positive alternatives. That is partially why I am making this post. The other reason is to illustrate the direction I am heading in. I already have some headway towards getting there and will look to follow through with this.

http://www.dvice.com/2013-5-13/nike-installation-turns-your-body-animated-digital-art
So I am trying to get into the creating aspect fairly quickly. I am doing this by exploring similar works and tutorials to put myself into a practical way of making. From completing the basic Isadora tutorials I understand more of its capabilities and can generate ideas from that. So far I am thinking along the lines of live video interactions, heading away from cameras and more towards expressive technology. I want the performers to choose the experience they put into the art piece. I will still focus on the art piece being the one with the experience, which is dependent on the performer's direction and choice of perspective. The next few days will be about testing live video and the possibility of capturing live video.

I found an article with a different perspective on how technology is used for experience. It states that technology used to express an experience actually supplements the real experience (the example given was of online social activity strengthening physical social interaction rather than replacing it). As a person gains more knowledge and interest around an experience, the experience itself becomes more engaging. In terms of my project, with the camera as a way to express a person's experience, this would lead to a better (or different) engagement for the person who later looks at the footage and then tries to replicate the experience. This way of thinking, though, is becoming more about rules I am looking for than an idea to express. I have to be wary that I do not become one who follows with "This is like this, therefore..." statements.

References

Digital Technology and How We Learn, Interact, and Experience Our World. (n.d.). Retrieved August 6, 2013, from http://matthawkhistory.blogspot.co.nz/2013/02/digital-technology-and-how-we-learn.html
Nike installation turns your body into animated digital art. (n.d.). DVICE. Retrieved August 6, 2013, from http://www.dvice.com/2013-5-13/nike-installation-turns-your-body-animated-digital-art
Tutorials | TROIKATRONIX. (n.d.). Retrieved August 6, 2013, from http://troikatronix.com/tutorials/