Friday, June 7, 2013

Individual Contribution

At the end of this project, Geriambience has created a system that allows a user to interact with a virtual bathroom using the skeletal tracking functions of the Microsoft Kinect. The bathroom has been carefully modeled to replicate a real bathroom and uses Caroma bathroom products, as Caroma is the sponsor for this project. The Kinect tracks the user and represents them within the virtual bathroom, demonstrating visually and in text form how the bathroom reacts to the user's actions.
While our group had a somewhat disjointed start, as the project progressed we came together to develop a fully functioning product. Our final result not only represents the conceptual elements of the potential bathroom, but also provides a basic framework on which a more fully featured and physically realized project could be constructed.
This blog post details my personal involvement in the project and an assessment of my development and contributions.


Contributions:


Once we split the group into two teams, the programming team (Steven and me) and the bathroom design team (Laura, Dan, and Siyan), we began looking at how we were going to approach this project. There were effectively two stages of development that I was involved in: the Kinect Movement System phase and the Kinect Interactivity phase.

Kinect Movement System:
In the initial brief, we had planned to develop a system through which the Kinect could be physically moved to allow us to effectively track people through a bathroom space. After a few concepts, we decided to imitate the "Skycam" system, where a camera is suspended on four wires wound onto winches at each corner of the room, meaning the Kinect would be able to move anywhere in the space. We also included a rotating joint so the Kinect could turn to track a user in corners or other hard-to-reach places. After some preliminary designs, I came up with this plan:

[Image: preliminary plan for the Kinect movement system]

The design used two separate sections: the top plate, which held the four wires and the stepper motor, and the bottom plate, which held the Kinect sensor. These were connected by four bolts that ran in a groove in the bottom plate, allowing the stepper motor to spin the bottom plate.
We got to the prototyping stage and created the following prototype:

[Image: the assembled prototype of the movement system]

Unfortunately, once we reached this stage we realized that developing this further would take at least the rest of the semester: the prototype had a number of flaws, such as a lack of smoothness when turning, and the Arduino implementation to drive the wires required a large number of parts to be shipped.
We decided that instead of continuing to focus on this single element, we would begin work on the Kinect Interactivity and order a 2-axis powered joint, which we hoped to implement much more quickly than this system. Unfortunately, that joint never arrived, but we discovered that, for testing purposes, a stationary Kinect gave us sufficient coverage of an area, provided we positioned it correctly.

Kinect Interactivity:
When we moved on to the Kinect Interactivity phase, we started by approaching Stephen Davey for a copy of his CryEngine 3 code, since he'd previously created some Kinect integration, and walking through exactly what it did with him. Thankfully I had some previous experience with both C++, the language the code was written in, and Visual Studio, the IDE used to edit it, so Stephen was able to explain the code to Steven and me at a higher level than would otherwise have been possible.
Once we had a basic understanding of how he'd tied the Kinect SDK to the CryEngine 3 code, we began building our own flowgraph node to suit our needs. Here is an image of the final node:

[Image: the final Kinect flowgraph node]

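For anyone curious what such a node looks like in code, the sketch below shows the rough shape of a CryEngine 3 flowgraph node. It's reconstructed from memory rather than taken from our actual source, and ReadHipFromKinect is a hypothetical helper standing in for the Kinect SDK calls:

    #include "FlowBaseNode.h" // CryEngine 3 SDK header; not compilable standalone

    class CFlowNode_KinectSkeleton : public CFlowBaseNode<eNCT_Singleton>
    {
    public:
        CFlowNode_KinectSkeleton(SActivationInfo* pActInfo) {}

        enum EOutputs { eOP_HipPos = 0 };

        virtual void GetConfiguration(SFlowNodeConfig& config)
        {
            static const SInputPortConfig inputs[] = { {0} };
            static const SOutputPortConfig outputs[] =
            {
                OutputPortConfig<Vec3>("HipPos", _HELP("Hip joint position")),
                {0}
            };
            config.pInputPorts = inputs;
            config.pOutputPorts = outputs;
            config.sDescription = _HELP("Outputs Kinect skeleton joint positions");
            config.SetCategory(EFLN_APPROVED);
        }

        virtual void ProcessEvent(EFlowEvent event, SActivationInfo* pActInfo)
        {
            if (event == eFE_Initialize)
            {
                // Ask the graph to update this node every frame.
                pActInfo->pGraph->SetRegularlyUpdated(pActInfo->myID, true);
            }
            else if (event == eFE_Update)
            {
                Vec3 hip = ReadHipFromKinect(); // hypothetical Kinect SDK wrapper
                ActivateOutput(pActInfo, eOP_HipPos, hip);
            }
        }

        virtual void GetMemoryUsage(ICrySizer* s) const { s->Add(*this); }
    };

    REGISTER_FLOW_NODE("Kinect:Skeleton", CFlowNode_KinectSkeleton);
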
We began by creating an output for the hip joint, so we could begin tracking the player's rough location. Within CryEngine, we created a sphere that would constantly move to the vector provided by the Kinect, so we could visualize the player's location. This revealed that the Kinect uses a different axis orientation from CryEngine, so depth from the Kinect became height in CryEngine. This can be seen in this video:

Thankfully this problem was easily fixed, since we just needed to swap the components of the vector before applying them. In code, the conversion is a single component swap; a minimal sketch, using a plain Vec3 struct in place of CryEngine's vector type:
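    struct Vec3 { float x, y, z; };

    // Kinect skeleton space: x = right, y = up, z = depth (away from the sensor).
    // CryEngine world space:  x = right, y = forward, z = up.
    Vec3 KinectToCryEngine(const Vec3& k)
    {
        Vec3 c;
        c.x = k.x; // left/right is unchanged
        c.y = k.z; // Kinect depth becomes CryEngine forward
        c.z = k.y; // Kinect height becomes CryEngine up
        return c;
    }

Here is a time-lapsed screen capture from the Kinect coding: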



Once we had this working, we expanded upon it by creating vector outputs for each of the joints, so we could visualize the whole user within the environment. We used these vectors to position spheres at each of the joints, with the following results:

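In rough C++ terms, with plain structs standing in for the sphere entities (the names here are illustrative), the per-frame update amounted to:

    struct Vec3 { float x, y, z; };
    struct Sphere { Vec3 pos; };

    const int kJointCount = 20; // the Kinect SDK 1.x skeleton has 20 joints

    // Move each visualization sphere to its matching joint, once per frame.
    void UpdateSkeletonSpheres(const Vec3 joints[kJointCount],
                               Sphere spheres[kJointCount])
    {
        for (int i = 0; i < kJointCount; ++i)
            spheres[i].pos = joints[i];
    }
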
From here we looked back at the interactions we had initially set out to create in the back brief, to see how best we could implement them.

The following video is a time-lapse of the interaction development:


A lot of the interactions were based on a user's hand or whole body being near an object, such as a sink or a shower, so we created a system that would react to the user's hand being near a virtual object, in this case by resizing it:


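Conceptually, each of these proximity interactions reduces to a distance test between a joint and a fixture. A minimal sketch, with an illustrative trigger radius rather than the tuned project value:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float Distance(const Vec3& a, const Vec3& b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // True when the hand joint is within `radius` metres of the fixture.
    bool HandNearObject(const Vec3& hand, const Vec3& fixture, float radius = 0.3f)
    {
        return Distance(hand, fixture) < radius;
    }
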
There was also the toilet height adjustment, which tracked the user's bone lengths and set the toilet to the appropriate height. We created a test implementation of this using a horizontal door to simulate a seat:


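The underlying calculation is just a bone measurement and a scale; a minimal sketch, with a hypothetical scale factor standing in for whatever value testing settles on:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float BoneLength(const Vec3& a, const Vec3& b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Seat height derived from the lower-leg bone (knee to ankle).
    float SeatHeight(const Vec3& knee, const Vec3& ankle)
    {
        const float kScale = 1.1f; // hypothetical tuning constant
        return BoneLength(knee, ankle) * kScale;
    }
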
We then began looking at how you could control shower temperature. Stephen Davey had given us a basic understanding of how to create custom gesture recognition in code, so we came up with a simple gesture that could raise or lower a variable without being triggered accidentally. It involved the user holding their right arm out horizontally, then raising or lowering their left arm to raise or lower the variable, in this case the water temperature. We represented that variable in the initial tests by resizing a ball:


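The gesture check itself is simple to express. A rough sketch in Kinect coordinates (y is up), with illustrative thresholds rather than the values we settled on:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // The right arm counts as horizontal when the hand and shoulder
    // are at roughly the same height.
    bool RightArmHorizontal(const Vec3& shoulder, const Vec3& hand)
    {
        return std::fabs(hand.y - shoulder.y) < 0.15f;
    }

    // While the gesture is armed, the left hand's height relative to the
    // left shoulder maps to a 0..1 temperature value; otherwise the
    // current value is left untouched.
    float TemperatureFromGesture(const Vec3& rShoulder, const Vec3& rHand,
                                 const Vec3& lShoulder, const Vec3& lHand,
                                 float current)
    {
        if (!RightArmHorizontal(rShoulder, rHand))
            return current;

        float offset = lHand.y - lShoulder.y;   // metres above the shoulder
        float t = (offset + 0.5f) / 1.0f;       // map -0.5..+0.5 m to 0..1
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        return t;
    }
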
Once we were comfortable with creating interactions, we moved on to implementing them in the bathroom created by the other team.
The other team had helpfully already created some basic flowgraph interactions, such as turning lights on and off and moving the toilet up and down.
The various interactions have been covered in the previous posts, but this video demonstrates all of them, with basic explanations of each:




Here is a time-lapse of some of the bathroom interactions being implemented:
There were a number of challenges when creating these interactions, especially the context-sensitive ones, like the toilet and sink movement. Flowgraph's real-time style of operation makes implementing classical programming constructs, like loops and conditionals, somewhat more complicated. Thankfully we both had some previous experience with flowgraph, so we were eventually able to come up with effective solutions.
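As an example of the mindset shift: something like "move the toilet until it reaches the target height" can't be written as a blocking loop in flowgraph; it has to become a small bounded step applied every update. A rough C++ analogue of that pattern, with an illustrative speed:

    struct Toilet { float height; }; // stand-in for the toilet entity

    // Called once per frame, mirroring flowgraph's update-driven style:
    // no loop, just one bounded step per tick until the target is reached.
    void StepTowardTarget(Toilet& toilet, float target, float dt)
    {
        const float kSpeed = 0.2f; // metres per second, illustrative
        float step = kSpeed * dt;
        if (toilet.height < target - step)
            toilet.height += step;
        else if (toilet.height > target + step)
            toilet.height -= step;
        else
            toilet.height = target; // close enough: snap and stop
    }
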
An overview of the final flowgraph can be seen here:

[Image: overview of the final flowgraph]

The final interactivity was quite effective, and left me with a greater understanding of the challenges and possibilities of interfacing hardware and software. I feel there are a lot of opportunities for this system to be expanded and refined, especially if the Kinect could be replaced with a dedicated piece of hardware that could be mobilized to physically track the user and be less intrusive.



Individual Development:

During this project I learnt plenty of new things; some I expected, and others came as something of a surprise. The main areas I developed in were:

Mechanical Design:
I spent the first few weeks designing the Kinect movement system. I've made plenty of 3D models for this course previously, some of which have even been physically realized, but I've never had to create something with features beyond the aesthetic. This made creating the movement system an interesting challenge. It took me a while to figure out how to make it work, since we couldn't simply hang the heavy Kinect hardware off an inverted stepper motor; eventually I designed a system that would let the motor spin the Kinect without having to bear its weight. In retrospect, I would've been better off focusing on a pre-built system, but when some quick initial research turned up nothing appropriate, I decided it would be more effective to develop my own. I designed it to use laser-cut pieces, so it could be assembled relatively easily and prototyped simply. Unfortunately the hardware, using nuts, bolts, U-bolts and split pins, was more expensive than I expected and more difficult to assemble.
Through this process I learnt the importance of careful measurement and planning ahead, as well as thoroughly exploring other options before dedicating yourself to a single choice.

Interaction Development:
Developing the Kinect interaction was something I'd never really done before. I'd done some Kinect development for the HyperSurface installation at last year's Sydney Architecture Festival, but it was at a very abstract level and focused on representing the user, rather than on how the user interacted with the wall.
For this project, I had to explore how people could interact with the Kinect system in an intuitive fashion. As the main interaction tester (since Steven preferred not to be on camera), I had to think about not only whether the system felt natural to me, but how it would feel to an older person or to people of different heights, and whether the interaction would be easy to explain to someone.
[Image: the maths and logic controlling the light brightness]
I also had to decide, with Steven, on the best way to implement these interactions. Mathematically, I learnt some interesting new concepts, like 3-dimensional trigonometry to check whether a user's hands were in the correct positions relative to their body, and some more logic operations to control the flow of data through the flowgraph. This was probably the most interesting part of the project: trying to think about how natural human movement can be translated into a mathematically defined algorithm, while not being so specific that it becomes difficult to make the correct movement.
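As an example of the kind of check involved, the angle between two bone vectors (say, the upper arm relative to the spine) falls out of the dot product:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    float Length(const Vec3& a) { return std::sqrt(Dot(a, a)); }

    // Angle in radians between two bone vectors; compare against a
    // threshold to decide whether a pose counts as "correct".
    float AngleBetween(const Vec3& a, const Vec3& b)
    {
        return std::acos(Dot(a, b) / (Length(a) * Length(b)));
    }
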

Intellectual Property:
Another area our group, and I personally, engaged with was intellectual property, for the group presentation in week 8. My particular area of research was into what intellectual property our project was creating and how we should protect it. Having never really looked into this subject before, I was very interested to learn about the variety of types of IP that exist and the different uses for each one. The importance of protecting my ideas and creations had never occurred to me, probably because I've never tried to commercialize them, but the research I did showed me how not protecting your ideas can mean you risk losing any potential benefit at all from your work.

Collaboration:
Collaboration is something I had some experience with from other group projects, especially the twenty-person group project for studio last semester. For this class, there were two levels of collaboration: with Steven Best within the programming team, and with Laura, Dan, and Siyan in the design team. Steven and I had worked on a number of projects together previously, though this project was a bit different, as it essentially required us to work on a single computer. This was due to the CryEngine programming being difficult to split between two computers, as well as the fact that we only had access to one Kinect. This meant we spent a few days each week coming onto campus and working together on the project. It proved to be a very effective way of collaborating, since whenever one person had a problem, the other could provide assistance. It was also useful for this project in particular, since we were constantly testing with the Kinect. That would be quite difficult on your own, as you'd constantly need to go back and forth between adjusting things in the flowgraph and interacting with the Kinect. With two people, one could be interacting with the Kinect while the other stayed at the laptop and made adjustments based on what was happening. This sped up the development of the various interactions greatly, letting us refine them and ensure they worked consistently.
With the design team, we were somewhat disconnected for the first half of the project, since the design of the movement system and the design of the bathrooms did not require much collaboration. Once we moved on to the Kinect interaction, however, we began to work more closely together. The design team provided updates each week showing us how the bathrooms and the surrounding environments were looking. We used these to ensure that the Kinect systems we were developing would be appropriate to the fixtures they had in place. They also provided us with a set of basic interactions driven by the keyboard, which we were able to modify and integrate with the Kinect sensor.
They provided a large amount of detail on the bathroom design via the wiki, such as information about the various fixtures, plans of the bathrooms, and descriptions of the basic interactions. This was very useful throughout the development of the interactions, since we could already understand how the fixtures could be moved and what the existing flowgraph nodes did. It also helped when we were setting up the pseudo-bathroom to demonstrate the Kinect, since we already had a plan showing the dimensions of the bathroom and the locations of the various objects.
The other way we collaborated was via a Facebook group. This was particularly useful when organizing things like the Intellectual Property presentation as well as the times when the whole group met up, such as when setting up for the presentation. I think the important thing to realize about Facebook is that while it can be useful for some areas of collaboration, it's almost impossible to use it for others. It was excellent for collaborating on the IP presentation, since all we needed to communicate was text and images, but it would've been impossible to collaborate on the Kinect programming, since that requires large files being modified and constant testing and adjusting.
The combination of the wiki, Facebook, and face-to-face meetups allowed our group to collaborate effectively over the course of the project.


Future Considerations:

If we were to repeat this project, there are a number of things I would do differently.
The main problem was that we spent too much time initially on what should've been a small aspect of the project: the movement system. Next time, I would evaluate how long I wanted to spend on that system and then choose a kind of system appropriate to that time-frame. We would spend one to two weeks settling on a movement system, most likely the 2-axis powered hinge, then work on the Kinect interactions until the part arrived, letting us integrate the movement system alongside the interaction work.
In a more general sense, a greater level of planning prior to starting the project would be beneficial. Having a more clearly defined target and ensuring that we lay out the steps that are required to get to that target would decrease the stress and confusion within the project, as well as providing a metric that allows us to continually evaluate how our project is progressing.


Final Summary

This project has created an effective method of connecting 3D tracking technology to a virtual environment. It has also created a framework of interactivity which demonstrates how a user could potentially interact with a physical bathroom in a natural way. The project has been documented on the wiki to ensure that anyone in the future with an interest in this area has a resource they can refer to, and they can contact us via the comments if they want more information.
The project was an interesting learning experience, and allowed me to explore new areas of design I hadn't previously considered. My hope is that the elements and concepts within this project will continue on and eventually be fully realized as a commercial product that enables people to interact with their environments in an intuitive and useful fashion.

Tuesday, June 4, 2013

Week 13

This week was originally going to be the project presentation week, but due to some unfortunate problems with the CryEngine 3 software, which five of the seven groups were using, only two groups were able to present. Our group was ready to present, and even had a video to demonstrate the project in case the live demonstration didn't work. However, we decided it was best to give a live demonstration of the project, so we agreed with the other CryEngine groups to postpone it until next Tuesday.