Friday, June 7, 2013

Individual Contribution

At the end of this project, Geriambience has created a system that allows a user to interact with a virtual bathroom using the skeletal tracking functions of the Microsoft Kinect. The bathroom has been carefully modeled to replicate a real bathroom and uses Caroma bathroom products, as Caroma is the sponsor for this project. The Kinect tracks the user and represents them within the virtual bathroom, which demonstrates visually and in text form how the bathroom is reacting to the user's actions.
While our group had a somewhat disjointed start, as the project progressed we came together to develop a fully functioning product. Our final result not only represents the conceptual elements of the potential bathroom, but provides a basic framework on which a more fully featured and physically realized project could be constructed.
This blog post details my personal involvement in the project and an assessment of my development and contributions.


Contributions:


Once we split the group into two teams, the programming team (Steven and I) and the bathroom design team (Laura, Dan, and Siyan), we began looking at how we were going to approach this project. There were effectively two stages of development that I was involved in: the Kinect Movement System phase and the Kinect Interactivity phase.

Kinect Movement System:
In the initial brief, we had planned to develop a system through which the Kinect could be physically moved to allow us to effectively track people through a bathroom space. After a few concepts, we decided to imitate the "Skycam" system, where a camera is mounted on four wires which are wound onto winches at each corner of the room, meaning the Kinect would be able to move anywhere in the room. We also included a rotating joint on the system so the Kinect could rotate to track a user in corners or other hard-to-reach places. After some preliminary designs, I came up with this plan:













This worked using two separate sections: the top plate, which held the four wires and the stepper motor, and the bottom plate, which held the Kinect sensor. These were connected by four bolts that ran in a groove on the bottom plate, allowing the stepper motor to spin the bottom plate.
We got to the prototyping stage and created the following prototype:
















Unfortunately, once we reached this stage we realized that developing this further would take at least the rest of the semester, since the prototype had a number of flaws, such as a lack of smoothness when turning, and the Arduino implementation to move the wires required a large number of parts to be shipped.
We decided that instead of continuing to focus on this single element, we would begin work on the Kinect Interactivity and order a 2-axis powered joint, which we hoped to implement much more quickly than this system. Unfortunately, that joint never arrived, but we discovered that, for testing purposes, a non-mobile Kinect gave us sufficient coverage of an area, provided we positioned it correctly.

Kinect Interactivity:
When we moved on to the Kinect Interactivity phase, we started by approaching Stephen Davey, getting a copy of his CryEngine 3 code (since he'd previously created some Kinect integration), and running through exactly what it did with him. Thankfully I had some previous experience with both C++, the programming language the code was written in, and Visual Studio, the Integrated Development Environment used to edit it, so Stephen was able to explain the code to Steven and me at a higher level than would otherwise be possible.
Once we had a basic understanding of how he'd tied the Kinect SDK to the CryEngine 3 code, we began building our own flowgraph node to suit our needs. Here is an image of the final node:




















We began by creating an output for the hip joint, so we could begin tracking the player's rough location. Within CryEngine, we created a sphere that would constantly move to the vector provided by the Kinect, so we could visualize the player's location. This revealed that the Kinect uses a different axis orientation from CryEngine, so depth from the Kinect was height in CryEngine. This can be seen in this video:

Thankfully this problem was easily fixed, since we just needed to swap the variables within the vector. Here is a time-lapsed screen capture from during the Kinect coding:
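Conceptually, the swap looks something like this minimal C++ sketch. The `Vec3` type and function name are illustrative, not CryEngine's actual API, and it assumes the Kinect's usual convention of y as height and z as depth against CryEngine's z-up world:

```cpp
// Illustrative stand-in for CryEngine's vector type (assumption: this is
// not the engine's real Vec3, just a sketch of the idea).
struct Vec3 { float x, y, z; };

// The Kinect reports x = lateral, y = height, z = depth (metres from the
// sensor), while CryEngine 3 is z-up. Swapping the last two components
// maps Kinect depth onto the engine's forward axis and Kinect height
// onto the engine's up axis.
Vec3 KinectToCryEngine(const Vec3& k)
{
    return Vec3{ k.x, k.z, k.y };
}
```

So a joint reported 2 m in front of the sensor and 1.2 m up ends up 2 m forward and 1.2 m high in the engine, instead of floating at ceiling height.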



Once we had this working, we expanded upon it by creating outputs for each of the joints as vectors, so we could visualize the user within the environment. We used these vectors to position spheres at each of the joints, with the following results:

From here we looked back at the interactions we had initially set out to create in the back brief, to see how best we could implement them.

The following video is a timelapse of the interaction development:


A lot of the interactions were based on a user's hand or whole body being near an object, such as a sink or a shower, so we created a system that would react to the user's hand being near a virtual object, in this case by re-sizing it:


There was also the toilet height adjustment, which tracked the user's bone length and set the toilet to the appropriate height. We created a test implementation of this using a horizontal door to simulate a seat:
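The idea can be sketched roughly like this. The joint names and the foot-allowance constant are illustrative assumptions; the real version lived in flowgraph rather than standalone code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Straight-line distance between two tracked joints.
static float Distance(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Derive a comfortable seat height from the tracked lower-leg bone
// (knee to ankle), plus a small assumed allowance for the foot below
// the ankle joint.
float SeatHeightFromLeg(const Vec3& knee, const Vec3& ankle,
                        float footAllowance = 0.07f)
{
    return Distance(knee, ankle) + footAllowance;
}
```

Because the bone length comes from the skeleton rather than a fixed preset, the same logic adapts automatically to users of different heights.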


We then began looking at how you could control the shower temperature. Stephen Davey had given us a basic understanding of how to create custom gesture recognition in the code, so we came up with a simple gesture that we could use to raise or lower a variable without accidentally activating it. It effectively involved the user holding their right arm out horizontally, then raising or lowering their left arm to raise or lower the variable, in this case the water temperature. We represented that variable in the initial tests by re-sizing a ball:
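The logic behind the gesture was roughly as follows. This is a sketch under assumptions: the joint names, the z-up height axis, the tolerance, and the gain constant are all illustrative rather than the values we actually used:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical per-frame joint data; the real project read these from
// the Kinect SDK via the flowgraph node.
struct Arm { Vec3 shoulder, hand; };

// The right arm "arms" the gesture when held roughly horizontal, i.e.
// the hand is within a tolerance of shoulder height. This guards
// against the control activating accidentally.
bool IsArmHorizontal(const Arm& a, float tolerance = 0.15f)
{
    return std::fabs(a.hand.z - a.shoulder.z) < tolerance;
}

// While armed, the left hand's height relative to its own shoulder
// raises or lowers the controlled variable (here, water temperature).
float AdjustTemperature(float current, const Arm& left, float gain = 10.0f)
{
    return current + gain * (left.hand.z - left.shoulder.z);
}
```

Requiring the arming pose first is what makes the gesture hard to trigger by accident: simply raising your left arm while washing does nothing unless the right arm is also held out level.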


Once we were comfortable with creating interactions, we moved on to implementing them with the bathroom created by the other team.
The other team had usefully already created some basic flowgraph interactions, such as turning lights on and off and moving the toilet up and down.
The various interactions have been covered in the previous posts, but this video demonstrates all of them, with basic explanations of each:




Here is a time-lapse of some of the bathroom interactions being implemented:
There were a number of challenges when creating these interactions, especially the context-sensitive ones, like the toilet and sink movement. Flowgraph's real-time style of operation makes implementing classical programming methods, like loops and conditions, somewhat more complicated. Thankfully we both had some previous experience with flowgraph, so we were eventually able to come up with effective solutions.
An overview of the final flowgraph can be seen here:










The final interactivity was quite effective, and left me with a greater understanding of the challenges and possibilities of interfacing hardware and software. I feel there are a lot of opportunities for this system to be expanded and refined, especially if the Kinect could be replaced with a dedicated piece of hardware that could be mobilized to physically track the user and be less intrusive.



Individual Development:

During this project I learnt plenty of new things; some I expected, and others came as something of a surprise. The main areas I developed in were:

Mechanical Design:
I spent the first few weeks designing the Kinect movement system. I've made plenty of 3D models for this course previously, some of which have even been physically realised, but I've never had to create something with features beyond the aesthetic. This made creating the movement system an interesting challenge. It took me a while to even figure out how to make it work, since we couldn't simply hang the heavy Kinect hardware off an inverted stepper motor, so I designed a system that would let the motor spin the Kinect without having to support its weight. In retrospect, I would've been better off focusing on a pre-built system, but after some quick initial research turned up nothing appropriate, I decided it would be more effective to develop my own. I designed it to use laser-cut pieces, so it could be relatively easily assembled and simple to prototype. Unfortunately the hardware, using nuts, bolts, u-bolts and split-pins, was more expensive than I expected and more difficult to assemble.
Through this process I learnt the importance of careful measurement and planning ahead, as well as thoroughly exploring other options before dedicating yourself to a single choice.

Interaction Development:
Developing the Kinect interaction was something I'd never really done before. I'd done some Kinect development for the HyperSurface installation at last years Sydney Architecture Festival, but it was at a very abstract level and focused on representing the user, rather than how the user interacted with the wall.
For this project, I had to explore how people could interact with the Kinect system in an intuitive fashion. As the main interaction tester (since Steven preferred not to be on camera), I had to think about not only whether the system felt natural to me, but how it would feel to an older person or to people of different heights, and whether the interaction would be easy to explain to someone.
The maths and logic controlling the light brightness.
I also had to decide, with Steven, the best way to implement these interactions. Mathematically, I learnt some interesting new concepts, like 3-dimensional trigonometry to check whether a user's hands were in the correct positions relative to their body, and some more logic operations to control the flow of data through the flowgraph. This was probably the most interesting part of the project: trying to think about how natural human movement can be translated into a mathematically defined algorithm, while not being so specific that it becomes difficult to make the correct movement.
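As one concrete example of the kind of trigonometry involved, the elevation of a limb above the horizontal reduces to an atan2 of its vertical and in-plane components. This is a sketch; the z-up axis convention and joint names are assumptions:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Elevation angle, in degrees, of the shoulder-to-hand vector above the
// horizontal plane: 0 means the arm is level, +90 straight up, -90
// straight down. Comparing this against a tolerance is one way to test
// whether a hand is "in position" relative to the body.
float ArmElevationDegrees(const Vec3& shoulder, const Vec3& hand)
{
    float dx = hand.x - shoulder.x;
    float dy = hand.y - shoulder.y;
    float dz = hand.z - shoulder.z;                  // vertical component
    float horizontal = std::sqrt(dx * dx + dy * dy); // in-plane length
    return std::atan2(dz, horizontal) * 180.0f / 3.14159265f;
}
```

Working in angles rather than raw coordinates is what keeps the check forgiving: any arm length and any facing direction produce the same elevation for the same pose.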

Intellectual Property:
Another area our group, and I personally, engaged with was Intellectual Property, for the group presentation in week 8. My particular area of research was what intellectual property our project was creating and how we should protect it. Having never really looked into this subject before, I was very interested to learn about the variety of types of IP that exist and the different uses for each one. The importance of protecting my ideas and creations had never occurred to me, probably because I've never tried to commercialize them, but the research I did showed me how not protecting your ideas can mean you risk losing any potential benefit at all from your work.

Collaboration:
Collaboration is something I had some experience with previously on other group projects, especially the twenty-person group project for studio last semester. For this class, there were two levels of collaboration: with Steven Best within the programming team, and with Laura, Dan, and Siyan in the design team. Steven and I had worked on a number of projects together previously, though this project was a bit different, as it essentially required us to work on a single computer. This was due to the CryEngine programming being difficult to split between two computers, as well as the fact that we only had access to one Kinect. This meant we spent a few days each week coming onto campus and working together on the project. This proved to be a very effective way of collaborating, since whenever one person had a problem, the other person could provide assistance. It was also useful for this project in particular, since we were constantly testing with the Kinect. This would be quite difficult by yourself, as you'd constantly need to go back and forth between adjusting things in the flowgraph and interacting with it via the Kinect. With two people, one could be interacting with the Kinect while the other stayed at the laptop and made adjustments based on what was happening. This sped up the development of the various interactions greatly, so that we could refine them and ensure they worked consistently.
With the design team, we were somewhat disconnected for the first half of the project, since the design of the movement system and the design of the bathrooms did not require much collaboration. Once we moved on to the Kinect interaction, however, we began to work more closely together. The design team provided updates each week showing us how the bathrooms and the surrounding environments were looking. We used these to ensure that the Kinect systems we were developing would be appropriate to the fixtures they had in place. They also provided us with a set of basic interactions done using the keyboard, which we were able to modify and integrate with the Kinect sensor.
They provided a large amount of detail on the bathroom design via the wiki, such as information about the various fixtures, plans of the bathrooms, and descriptions of the basic interactions. This was very useful throughout the development of the interactions, since we could already understand how the fixtures could be moved and what the existing flowgraph nodes did. It also helped when we were setting up the pseudo-bathroom to demonstrate the Kinect, since we already had a plan showing the dimensions of the bathroom and the locations of the various objects.
The other way we collaborated was via a Facebook group. This was particularly useful when organizing things like the Intellectual Property presentation as well as the times when the whole group met up, such as when setting up for the presentation. I think the important thing to realize about Facebook is that while it can be useful for some areas of collaboration, it's almost impossible to use it for others. It was excellent for collaborating on the IP presentation, since all we needed to communicate was text and images, but it would've been impossible to collaborate on the Kinect programming, since that requires large files being modified and constant testing and adjusting.
The combination of the wiki, Facebook, and face-to-face meetups allowed our group to collaborate effectively over the course of the project.


Future Considerations:

If we were to repeat this project, there are a number of things I would do differently.
The main problem was that we initially spent too much time on what should've been a small aspect of the project: the movement system. Next time, I would evaluate how long I wanted to spend on that system and then choose a design appropriate for that time-frame. We would spend 1-2 weeks settling on a movement system, most likely the 2-axis powered hinge, then work on the Kinect interactions until the part arrived, allowing us to integrate the movement system while the interactions were still in development.
In a more general sense, a greater level of planning prior to starting the project would be beneficial. Having a more clearly defined target and ensuring that we lay out the steps that are required to get to that target would decrease the stress and confusion within the project, as well as providing a metric that allows us to continually evaluate how our project is progressing.


Final Summary

This project has created an effective method of connecting 3D tracking technology to a virtual environment. It has also created a framework of interactivity which demonstrates how a user could potentially interact with a physical bathroom in a natural way. The project has been documented on the wiki to ensure that anyone in the future with an interest in this area has a resource they can refer to, and they can contact us via the comments if they want more information.
The project was an interesting learning experience, and allowed me to explore new areas of design I hadn't previously considered. My hope is that the elements and concepts within this project will continue on and eventually be fully realized as a commercial product that enables people to interact with their environments in an intuitive and useful fashion.

Tuesday, June 4, 2013

Week 13

This week was originally going to be the project presentation week, but due to some unfortunate problems with the CryEngine 3 software, which five of the seven groups were using, only two groups were able to present. Our group was ready to present, and even had a video to demonstrate the project in case the live demonstration didn't work for some reason. However, we decided it was best to have a live demonstration of the project, and so we agreed with the other CryEngine groups to postpone it until next Tuesday.

Tuesday, May 28, 2013

Week 12

This week we refined the bathroom interactions, as well as adding a new interaction.

  • Light brightness control: We decided to add another gesture-based interaction to control the brightness of both the main light and the mirror lights. This gesture simply involves placing both hands at shoulder height and moving them apart to brighten the light, or together to dim it. The control switches between the main and mirror lights depending on where the user is standing, and once the user drops their hands, the light locks to the current brightness.
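The mapping from hand span to brightness can be sketched like this. The span limits are illustrative constants, and the real implementation was a flowgraph rather than C++:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// With both hands raised to shoulder height, the horizontal distance
// between them sets the brightness: minSpan maps to 0 %, maxSpan to
// 100 %, clamped in between. Dropping the hands simply stops updates,
// locking the light at its last value.
float BrightnessFromHandSpan(const Vec3& leftHand, const Vec3& rightHand,
                             float minSpan = 0.2f, float maxSpan = 1.2f)
{
    float dx = rightHand.x - leftHand.x;
    float dy = rightHand.y - leftHand.y;
    float span = std::sqrt(dx * dx + dy * dy);
    float t = (span - minSpan) / (maxSpan - minSpan);
    return std::clamp(t, 0.0f, 1.0f) * 100.0f;
}
```

Clamping at both ends matters in practice: a tracking glitch that briefly reports the hands far apart can't push the light past full brightness.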
We also corrected some inconsistencies in the fall detection and improved the smoothness of the sink and toilet movement.

We also, at Russell's suggestion, added more direct feedback on what was happening within the bathroom. This can be partially seen in the videos that have been uploaded; it effectively lets the user know what's happening. For instance, when the temperature is being changed, a number indicates the current temperature. There's also text that lets the user know if the bench height is currently tracking them, and whether they're in a position to adjust the main light or the mirror light.
We also placed the "Calling Ambulance" message in the same system, but gave it a more prominent position.

In preparation for the presentation in week 13, we set up an area in the classroom to simulate the bathroom. We outlined major features like the shower and sink with masking tape on the ground, used a chair for the toilet, and tables to represent the walls. We then shot the following video to comprehensively cover all the interactions, in case we were unable to get the live features to work during the presentation (this is also where we filmed the feature videos for this and the previous post):



For the presentation, we decided that a live demonstration would be by far the most effective way to present our part, along with asking Russell or Stephen to try the system themselves to see how it works. I'll be the demonstrator, while Steven talks about what I'm doing.

Thursday, May 23, 2013

Week 11

Bathroom Integration

This week was spent combining our Kinect interactivity with the bathroom models created by the other team. We began by positioning the player character within the virtual bathroom, which required some scaling and careful positioning. Once this was done, we began implementing our ideas regarding the interaction with the bathroom.

  • Lights: The first, and theoretically easiest, task was to control the lights with the presence of a user. The lights will only turn on if the user is within the room, and the mirror-lights will turn on when the user is near the mirror. The room lights turned out to be more difficult than was initially thought, since the Kinect's range was not large enough to scan the entire room while being able to sense a person leaving the room. Unfortunately this is more of a hardware problem, which is difficult to rectify given the closed nature of the Kinect hardware.
    The mirror lights work quite well, once we realized that the distance between the mirror and the user was being measured from the corner of the mirror.
  • Temperature: The next interaction was controlling the temperature using our custom-built gesture.
    Since this had already been used to control the size of an object during initial testing, it was quite simple to use it to instead control the height of a rectangular prism in the corner of the shower, to represent the temperature change.
  • Height Control: This ended up being one of the more difficult interactions to create. Although we'd already created a "bench" that would automatically match your knee height, using this to change the toilet and sink heights was more difficult. For instance, we didn't want the toilet to constantly move around as you moved; rather, it should match your height once and lock at that position. This took some time to figure out. Another issue was only activating them when the user was close enough, to ensure that, for instance, the sink did not suddenly lower itself when a user sat on the toilet.
    There was also the problem of the bench going to extreme positions when the Kinect was unable to track the user, so there are now limits in place for both the bench and the toilet to ensure they maintain reasonable heights regardless of any extraneous input.
  • Fall Detection: One of the interactions we discussed very early on in the project was detecting when someone falls down. This was also a difficult thing to detect, since the Kinect cannot properly sense a person lying down: their limbs are less defined, and their angle to the camera means there are often parts of the body that can't be seen. In testing, the representation of the user would glitch and jump around, meaning our expected check of "Is the waist at the same height as the head?" would not consistently work. We developed a process which takes the general difference between the head and waist over a period of time, so even if the skeleton is glitching and jumping around, it can still detect that the person has fallen over and notify someone automatically.
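The smoothing described above can be sketched as a rolling average over recent frames. The window size and threshold here are illustrative, not the values we actually tuned:

```cpp
#include <cstddef>
#include <deque>
#include <numeric>

// Because a lying-down skeleton glitches from frame to frame, we check
// the head-above-waist height difference averaged over a window of
// recent frames, rather than trusting any single frame.
class FallDetector {
public:
    explicit FallDetector(std::size_t window = 30, float threshold = 0.25f)
        : window_(window), threshold_(threshold) {}

    // Feed one frame's head and waist heights (metres). Returns true
    // once the averaged difference suggests the person is horizontal.
    bool Update(float headHeight, float waistHeight)
    {
        diffs_.push_back(headHeight - waistHeight);
        if (diffs_.size() > window_) diffs_.pop_front();
        if (diffs_.size() < window_) return false;   // still warming up
        float avg = std::accumulate(diffs_.begin(), diffs_.end(), 0.0f)
                    / static_cast<float>(diffs_.size());
        return avg < threshold_;
    }

private:
    std::size_t window_;
    float threshold_;
    std::deque<float> diffs_;
};
```

A single glitched frame where the head briefly reads at waist height barely moves the average, so the alarm only fires when the pose persists.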

Tuesday, May 21, 2013

Group Presentations Review: Remuneration

Vivid Group:

The Vivid Group did the final group presentation on the subject of remuneration.
Their presentation was inconsistent: some members presented clearly and engagingly but, possibly as a result of the topic rather than personal choice, went into far more detail than was useful, creating something of an information overload, while other members seemed unsure of what they were talking about and were overly vague.
The members in the first group were surprisingly competent at audience engagement, only glancing down at their notes occasionally and speaking clearly and informatively. Those in the second group fell into the common trap of spending most of their time talking to their notes.
The Prezi presentation was well structured but had too much text on the screen, meaning it was difficult to get a clear picture of what they were saying, especially with some members talking at an excessive speed.

The written presentation was detailed but suffered from too many unexplained terms and concepts. It may have assumed too much about the knowledge of the audience, meaning that complex ideas were being explained without explaining the core ideas they're built on.

They gave some good examples which were fairly well explained, but the examples didn't really relate to their project, so their usefulness was limited.
Their presentation didn't have many images in it, and the ones that were present were fairly generic stock photos, though this is somewhat understandable given their topic.
They did seem to have a decent understanding of the topic, but some of them seemed unsure how to present this knowledge effectively.
With that being said, it is a difficult topic to make interesting or engaging, so some leeway could perhaps be given for that, but I think a simpler presentation with more direct links to their project would've made it much more informative and engaging.

Tuesday, May 14, 2013

Group Presentations Review: Conflict

DCLD Group:

The DCLD Group were the first to present on the subject of conflict. Their presentation was detailed and complex, but suffered from the common problem of having too much text on the slides and a lack of engagement with the audience. Their oral presentation leant heavily on reading text from the slides, or at least from the screen in front of them. This meant that instead of feeling like I was being told something, it was more like I was simply being read something, which I can do on my own. There also seemed to be a lack of organisation with the slides, as they often skipped over some slides or went backwards to previous ones.
The written component of the presentation contained plenty of details, but in my opinion consisted of more lists than was useful. Lists are good for creating a starting point, giving a basic outline of what topics you're covering, but after that it's more engaging, and easier to follow, to talk about things progressively, moving from one idea to the next smoothly without just jumping to the next point on a list. As an extension of the list problem, there was a sense of disconnectedness between each topic, with little explanation of why they were in this particular order or how one topic followed on from the previous one.
Their examples were good, especially the BIM one, but were not explained effectively, so while I got a good idea of what conflicts might arise, I didn't really understand what the potential solutions were.
They referenced most of their images, but the references were often too small to read.

Their images were decent, but contained a lot more text than was useful. In some cases, especially the large flowgraph, it was almost impossible to read the text, meaning the audience had only the vaguest idea how it connected to the topic.
While there was useful information in the presentation, the way it was presented gave the impression that the team didn't really have a deep understanding of the topic and were just reading what was in front of them for the first time.

Kinecting the boxes:

The “Kinecting the boxes” presentation was the second one on the topic of conflict.
Their oral presentation was better than the last group's, as they read mostly from notes rather than off the slides, though there was still a distinct lack of connection with the audience, which could've been achieved by at least looking at them regularly.
Their reading was clear, but not engaging, as if they were simply dictating the text rather than trying to explain something to an audience.
There was also a significant imbalance between the people presenting, as some presented for far longer than others. This may have been due to one person simply being more willing to talk than others, but it did lead to what felt like a less effective group dynamic.
The presentation also went substantially longer than it should've; succinct explanations are far more effective with an audience than lengthy, convoluted details.
Their written presentation was quite clearly written, though it tended on occasion to be oddly melodramatic. They suffered from the same problem as the last group by including far more lists than was useful. They did manage to maintain a reasonable level of flow between the various topics, though having some kind of overall outline would have been useful in identifying this.
Their examples were a bit unrelated, and tended to be very general rather than project-specific. While it's possible they had experienced very little conflict in their group, even theoretical examples are more useful than broad generalisations, since conflict resolution is something that should be considered on a case-by-case basis.
Their presentation was well laid out, but a lot of the images had too much text. The flowgraph was interesting, but took far too long to explain.
It was difficult to tell if they had a good understanding of the topic, with so much of the information being read off cards rather than delivered to the audience. While they had a fairly basic view of conflict, they explained it well and gave plenty of detail on the potential resolutions, though there were a few occasions of repeated content.

Monday, May 13, 2013

Week 10

Interactivity Testing:

Gestural Control Test: Scaling a ball using a specific gesture to ensure against inadvertent activation, usable for light or temperature control:

 

Proximity Test:
Using the distance between a specific joint, in this case the hand, and an object to interact with the object, usable for operating taps or opening cupboards:


Skeletal Analysis Test:
Using skeletal proportions, in this case from the leg, to control the height of an object, such as a chair or bench, to make it comfortable to use:

Arduino Testing:

Stepper motor test:

Monday, May 6, 2013

Visual Evidence of Progress

Arduino Tests:

Controlling a Hobby Servo:

Controlling a basic motor:

Controlling an LED with a potentiometer:


(Made with the assistance of Sparkfun Tutorials)

Controlling a stepper motor requires a specific driver chip, which is currently in the mail.

Kinect Progress:

CryEngine can now access skeletal positions as well as some basic gestural controls.


Full Skeletal tracking.

(Programmed with the help of Stephen Davey)

Saturday, May 4, 2013

Individual Major Milestone


Introduction:


The key feature of this project (it's in the course name) is collaboration. Unfortunately, this is an area in which our group has not been particularly successful. This project is part of a larger project under the guidance of Stephen Davey. Unfortunately, he has had a large amount of other work to attend to and has not had many opportunities to provide direction, though he has been quite willing to give feedback.
I think that, in retrospect, the development of a timeline for us to work in would have helped us both assign tasks to be completed and complete them within a reasonable time frame.
However, we have made substantial progress and I hope we will achieve an interesting and innovative solution by the end of this project.
Hopefully we can still deliver an effective idea, even if it's somewhat theoretical, but it would be much more useful to have a functioning prototype.
I have made progress towards my individual milestones; however, a lack of group interaction and technical limitations have hampered my ability and enthusiasm to make more substantial developments. Hopefully our group and I will be able to come together to create an effective project solution, but this will require a cohesive effort on the part of every member.

My personal progress has mostly been in the areas of industrial design and intellectual property. I had hoped to use this project to increase my programming prowess, particularly with the Kinect; however, technical limitations have meant that I am unable to take as active a role as I would have liked in this part of the project.


Collaboration:

Our group has struggled with collaboration throughout this project due to the language barrier between its members and also the lack of a single, fundamental goal which has been present in past studios. I feel this is somewhat a result of the group formation process, since a group with a wider range of skills may have been able to better divide elements of the project and potentially communicate more effectively.
The detachment of some members of the group has also meant that Steven, as the group leader, receives very little feedback or even basic contributions to the project; effectively, some members' only action is to complete assigned work, slowly and ineffectively, and then ignore the project. This problem is much more difficult to resolve, since it stems from the attitudes of the members themselves, and the only solution so far has been to not rely on getting any input from them and simply run the project with a smaller group.


Industrial Design:

One area in which I personally have developed is Industrial Design. Having never done anything remotely similar to designing this Kinect mechanism, I was unsure where to start. Unfortunately, none of my team-members had any experience in this area either, so I effectively had to use trial-and-error to discover an effective solution. The solution I created turned out to be somewhat over-engineered, meaning it was excessively expensive and only marginally efficient. This was due to my focus on having it laser-cuttable and structurally sound. Perhaps if I had focused more on pre-built alternatives, I could have procured an effective powered bearing that would have filled the same role as this design, but more cheaply and effectively. Thankfully, we have a fall-back in the form of the 2-axis bracket; however, the amount of time spent on this is largely disproportionate to the payback. While this is useful to know, it is unfortunately too late to be of much use in this particular course.


Intellectual Property:

Due to the presentation we were required to deliver, I have become much more proficient in the area of Intellectual Property, particularly in the process of acquiring protection for the various kinds of IP that can be created. This promises to be useful in both this and future projects, as most of the things we create in our course can be protected, and there are ways this protection can be used to generate income as well as guard against IP theft. I also had another assignment in a different course on the subject of Intellectual Property, for which this presentation was of great value. There are very few areas, in fact, where a knowledge of IP would not be useful. I would say that so far it has been the most useful part of this course.


Programming:

While I had planned to focus on the programming side of this project, the lack of an available Kinect, as well as the inability to even create flowgraphs on Windows 8, means that this side of the project has rested mostly with Steven.
With that being said, we've worked collaboratively on the programming side and, with Stephen's help, have achieved a working integration of the Kinect sensor and CryEngine 3. We hope to add the ability to track individual limbs to facilitate more intricate interactions with the various elements of the environment, though this may require delving into the Kinect SDK and exposing more of the positional joint data.
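As a rough illustration of what limb tracking would give us, here is a hypothetical sketch (in Python, for clarity) of the kind of check a flowgraph node could perform once individual joint positions are exposed. The joint names and coordinates are placeholders of my own, not actual Kinect SDK identifiers.

```python
# Hypothetical sketch: deriving a gesture from tracked joint positions.
# Joints are stored as (x, y, z) tuples in metres, with y pointing up.

def is_palm_raised(joints, hand="hand_right", shoulder="shoulder_right"):
    """Treat the palm as 'raised' when the hand is above the shoulder."""
    return joints[hand][1] > joints[shoulder][1]

# Example skeleton frame (made-up values):
frame = {
    "hand_right": (0.30, 1.60, 2.0),
    "shoulder_right": (0.20, 1.40, 2.0),
}
print(is_palm_raised(frame))  # True
```

A check like this is all that's needed to drive interactions such as raising the water pressure when the user lifts their palm.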

I've also worked on understanding the Arduino coding language so that, if we manage to implement it, we can control the movement of the Kinect in reaction to the location of the user.
Using the Arduino control board to drive a number of stepper motors would be the main component in moving or rotating the Kinect sensor. Stepper motors work by energising electromagnets arranged around a central toothed gear, each pulse rotating the gear a minuscule amount, down to a single degree. This gives the motor a high degree of accuracy, though at the cost of lower speed. Rotation speed, however, is not particularly important for this project, since the sensor only has to keep up with a human at walking speed, potentially even slower.
The implementation of this is quite simple: it requires only four wires to be connected, with an electric pulse sent down them in sequence to activate each of the four electromagnets in turn.
The use of these motors, given their accuracy, is the most effective solution for tracking people, and, if we manage to implement the Kinect Apparatus, for controlling the wire winches that move it around the room.
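The pulse sequence described above can be simulated in a few lines. This is a sketch of the logic only (in Python rather than actual Arduino code), and the 1.8 degrees-per-step figure is a typical value for hobby steppers, not one we've measured.

```python
# Full-step sequence for a four-coil stepper: each row shows which
# of the four electromagnets is energised at that point in the cycle.
STEP_SEQUENCE = [
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]

def coil_pattern(step):
    """Coil activation pattern for the nth pulse sent to the motor."""
    return STEP_SEQUENCE[step % len(STEP_SEQUENCE)]

def pulses_for_angle(degrees, degrees_per_step=1.8):
    """Number of pulses needed to rotate the Kinect by a given angle."""
    return round(degrees / degrees_per_step)

# Turning the sensor 90 degrees would take 50 pulses at 1.8 deg/step:
print(pulses_for_angle(90))  # 50
print(coil_pattern(5))       # (0, 1, 0, 0)
```

On the actual Arduino, each pattern would be written to four output pins with a short delay between pulses to set the rotation speed.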


Conclusion:


I have made progress towards my individual milestones; however, a lack of group interaction and technical limitations have hampered my ability and enthusiasm to make more substantial developments. Hopefully our group, myself included, will be able to come together to create an effective project solution, but this will require a cohesive effort on the part of every member.

Wednesday, May 1, 2013

Week 8

This week the Kinect Apparatus prototype came back from the laser cutters and was assembled.
Overall I was quite pleased with the result. However, there are a number of issues that would need to be resolved. For instance, the brackets that hold the Kinect are substantially too large due to a measuring error, meaning the Kinect does not sit particularly well and would have to be braced to stop it moving around. The brace that stops the Kinect sliding sideways does not connect quite as well as it should, but this can be rectified with some minor filing.
The rotation system works well, but as Russell pointed out, it's rather excessive and somewhat expensive, with the whole setup costing around $80.
We are looking into cheaper alternatives, such as a pre-built hinge. Unfortunately, the specific kind of apparatus we require doesn't seem to exist. Some options include table bearings, which don't have a motor and are substantially too heavy, and motorized pan-tilt heads, generally used for cameras, which tend to be too expensive and would be difficult to modify.
For the moment we plan to keep the prototype as a basic proof-of-concept and work with the 2-axis bracket that Russell has ordered.

We also gave our Intellectual Property presentation today. I think we did quite well, especially with the way we discussed it in terms of our project, which is something the other group didn't focus on. The text of our presentation can be found on our wiki.

Tuesday, April 30, 2013

Group Presentations Review: Intellectual Property

Interactive Architecture:

The first presentation on Intellectual Property was by the “Interactive Architecture” group.
Their presentation was quite well structured and had plenty of informative content. Their oral delivery varied: it was quite good when they were explaining points in their own words, demonstrating their understanding of the topic, but some of the group members tended to simply read off cards or the screen, making the oration quite monotone and not very engaging.
Some members seemed quite tense and would rattle off large amounts of text with very little time to actually absorb what was being said in between, while other members were perhaps overly relaxed, meaning their sentences were vague and not particularly well structured.
The written presentation had some useful summaries of the various concepts in intellectual property, though the style in which these were presented was not very consistent, which suggests that the amount of collaboration prior to the presentation was fairly small. This was also evident in the presentation styles, as some members seemed far more confident or knowledgeable than others.
They had plenty of examples, but these seemed to all be external, with no reference to their actual project. Project-specific examples tend to be more useful than external ones, since they show how you can take a product and run through how to protect it, whereas an external example tends to start with something related to IP and work backwards to the product, so you don't really understand the process taken to protect it. These examples were backed up by images, which were relevant and fairly well explained, if occasionally mis-scaled.
They also had a physical example, which was quite interesting and certainly engaged the audience more than images or text could.

Most of their members seemed to have a good understanding of IP, though some effort to make the presentation consistent beforehand would have been beneficial.

Sunday, April 28, 2013

Communication Presentation Review


The presentation by the “Shades of Black” group on the topic of communication was moderately well presented. There were occasions when the speaker was excessively verbose, making it sound more like they were regurgitating meaningless rhetoric rather than actually explaining their understanding of the topic. On the whole the slides were well composed, being relatively light on text and using interesting, if somewhat generic, images. The examples used were largely useful and helped to explain the topic, though the one accompanying the “integrity” topic seemed unrelated and didn't explain much, so I understand less about that topic than the others, which perhaps speaks more to the quality of the other examples than to the lack of it in that particular one.

The topic of communication is a rather broad one, and I felt that the presentation would have been more easily understood had the scope been narrowed to a small subsection of communication rather than trying to encompass the whole conceptual topic.

The images were interesting, but I felt the connection to the topic being discussed was a bit flimsy and their actual relevance seemed fairly limited. While those little images of faceless models interacting are nice, they don't convey much of a message. Diagrammatic images tend to be more interesting, especially if you use them to explain your point, like the second image in “methods”.

In terms of the written content, it's well organised and the references are relevant and plentiful. There does seem to be a lack of content for the “Integrity” topic, however.

The video is not particularly useful; it explains very little that couldn't be explained in text. Also, the style of content in the video varies frequently, making it difficult to determine how to interpret what was being said.

Monday, April 22, 2013

Week 7

This week the Kinect Rig prototype design was completed and sent to the laser cutters.
The 3d modelled prototype.
The laser-cutting schematics.



We also began testing the Kinect integration on Steven's computer. The tests we ran were successful, though as with all Kinect-based activity, the environment can have a detrimental effect on the effectiveness of the device. This means we can begin fine-tuning the kind of interactions we planned at the start and hopefully have them operational soon.


We began preparations for the Intellectual Property presentation we are giving next week. We split the presentation up into 5 main areas for the 5 group members, which fall into 2 categories as follows:

Using IP:

  • How to use other people's IP.
  • What other IP we use and how we should use it.

Creating IP:

  • Automatic IP Rights.
  • Formally Registered IP Rights.
  • IP we've created and how we'd protect it.

I chose the 5th topic, IP we've created and how we'd protect it. There are 3 main things I think should be discussed under this topic:
  1. The Kinect apparatus.
  2. The CryEngine level and models.
  3. The software written to react to the Kinect.
We plan to meet up on Friday this week to fully write up the presentation to ensure it works as a cohesive whole.



Friday, April 19, 2013

Planning Presentation Review


The presentation by the 3RDiConstruct team on the topic of Planning was an interesting report on the types and application of planning strategies and techniques.
The presentation was done using the “Prezi” tool, which I felt was not used to its fullest effect; in fact, the only effect it had was to make me somewhat dizzy. The slides themselves suffered from one of the common problems with these kinds of presentations: an abundance of text on the page, with a few images and diagrams here and there. The text itself was hard to read, partly due to screen size, and the images were often not entirely clear. The abundance of text meant that a large proportion of the presentation was spent simply reading text off the slides. This not only lessened any interest I had, but also gave the impression that the presenters were not particularly familiar with the content they were presenting. It also meant that instead of speaking clearly and intelligibly, there were often times when the reader would accidentally miss a word or attempt to paraphrase a long paragraph, and the resulting sentence was somewhat nonsensical. I frankly prefer someone who says um and ah here and there while actually thinking about what they're going to say to someone who simply vocalizes text on a screen without thinking about the actual sentences.
The content was well split up, though the jumps between topics felt somewhat disconnected, and in general the content was of a reasonable quality. One way this could definitely have been improved is with more examples from the group's project. While examples were given, they felt like a side note. More central examples, using the project as a base from which to explain the planning process rather than explaining the process and attaching little examples to each step, would have been much more useful in showing practical applications of the concepts described.
A few final suggestions: brevity is the heart of a good presentation. If you can explain something quickly and succinctly, not only will people listen to it, but they'll keep listening afterward.
Also, don't point out things you haven't done; it's not useful or particularly encouraging.

Tuesday, April 16, 2013

Week 6

We began working on integrating the Kinect sensor into CryEngine. Stephen Davey has already written some plugin nodes that can be used to control the in-game character by moving in front of the Kinect. Whether this will be effective enough for the amount of interaction we're hoping for remains to be seen. We may need to use some more basic input, as the node he demonstrated was more tied to making the character walk along by walking on the spot, whereas we are more interested in tracking people's actual movement by moving the Kinect, as well as their hand gestures.
We ran into some problems because CryEngine has not yet been updated to work properly with Windows 8. This means most of the development will have to be done on another laptop, probably Steven's, but unfortunately he wasn't here this week.
This could make development a bit slower, along with the fact that we don't have a Kinect ourselves and have to borrow them from BECU, meaning any progress has to be made while on campus.


The level creation group also presented their current progress to the group.
The level

The larger bathroom

While not yet complete, the models look quite good and follow the plans provided by Stephen quite well. The environment is nice, though perhaps a bit too detailed given that the entire focus will be on one room.

Wednesday, April 10, 2013

Week 5 - Non-Teaching

This week was a non-teaching week, but progress continued on the project. Steven and I decided on a form for the Kinect Rig: we'll develop a design for a spider-cam-like device and get feedback on it from Russell and Stephen.
I've come up with a preliminary design for a prototype:
The design still needs to be refined to make sure it can be constructed and that the motor will be able to connect, but I think this could potentially be quite effective.
It uses the heads of 4 bolts to hold up the lower section while the motor in the middle rotates it. The Kinect is connected by two laser-cut brackets and a piece stopping it from sliding sideways. The four wires are connected to the top-plate as well as the motor, meaning the Kinect can be rotated as well as moved around.

Tuesday, March 26, 2013

Week 4

This week we chose which group presentation topic we would be doing. We chose to do the Intellectual Property topic, as it's quite relevant to our project, given the amount of IP we are both using and creating. Thankfully we have a number of weeks to prepare for this presentation, which is useful in order to determine exactly what role IP plays in our project and why it's important.
There is also another group doing IP, since there are 6 groups and 5 topics. It will be interesting to see how our presentations differ and what effect our differing projects have on our perception of the topic.

We discussed with Stephen Davey and Russell Lowe where our project should be headed and how much we can feasibly get done. This is one of the more difficult parts of the project, since it's very difficult to tell what kind of problems we'll come across, especially with the Kinect integration; Stephen Davey has some code already implemented, but building on it could take a single day or multiple years. With this in mind, we'll need to continually re-evaluate what our end goal is as we progress through this project, so we don't devote too much time to a single area and end up with an incomplete project or a project with a single, disconnected feature.

We also began discussing the kind of design we wanted to use to move the Kinect around the room. The designs we are currently evaluating are: a simple two-axis hinge system that could sit in the corner of the room and simply rotate side-to-side and up and down to keep the user in range; a spider-cam-like system that uses 4 wires on winches to move the Kinect across the roof of the room, with a rotating motor to turn and face the user; and a rail-based system that can move the Kinect along a rail on the roof, and potentially throughout the house.
Each of these designs will be evaluated in terms of its potential flexibility as well as ease of construction.

Wednesday, March 20, 2013

Week 3

Back Brief:
This week we presented the back-brief, which is essentially a statement of what we have decided to do for our project and what component tasks need to be done to complete it.


Back Brief Outline

  • Group Name: Geriambiance
  • Group Members:
    • Steven Best (leader)
    • Dan Zhang (Dan)
    • Jing Liu (Laura)
    • Matthew Kruik
    • Siyan Li (Allen)
  • Group Project: Livable Bathrooms

Project Conditions

  • Project Concept:
This project aims to create a series of smart and convenient bathroom appliances and fixtures, designed for ease of use by elderly people. A Kinect for Windows will be situated in a corner of a bathroom to track residents' movements and body reactions, allowing it to adjust bathroom devices intelligently. The height and conditions of furniture and equipment can be controlled by residents' gestures to help with everyday bathroom activities that might otherwise be hindered by limited mobility.

  • Project Proposal:
In the project, a series of bathrooms will be designed and imported into CryEngine 3, a video game engine. This will be linked to a Microsoft Kinect for Windows, a motion tracking device, which allows a person to move around in a real-world space and control the in-game character. This demonstrates how the Kinect could allow the control of real-world appliances and fixtures through gesture control.

  • Specific Intelligent Equipment:
    • Door:
      • Open - When residents stand in front of and face the door, the door will automatically open.
      • Close/lock - When residents enter the room, the door will automatically close and can lock with the use of a specific gesture.
    • Light:
      • Main light – on/off: When residents enter or leave the bathroom, the main lights will turn on or off accordingly.
      • Mirror light – on/off: When residents enter or leave the area occupied by the mirror, lights around the mirror turn on and off accordingly. Gesture control allows for the dimming and brightening of these lights.
      • Shower light – on/off: When residents enter or leave the shower area, the shower light will automatically turn on and off accordingly.
    • Sink:
      • Turn on – When users put their hands under the tap, the tap will turn on.
      • Turn off – When users move their hands away, the tap will turn off.
      • Up – When users lift up their palms, the water pressure increases.
      • Down – When users move their hands down, the water pressure decreases.
    • Shower:
      • Tracking – When a resident turns the shower on, the Kinect knows where their body is in relation to the shower head, and will move the shower head to be in the right position for the body, both in height and sideways movement.
      • Temperature:
        • Up – When users lift up their palms in the shower, the water temperature increases.
        • Down – When users move their hands down in the shower, the water temperature decreases.
    • Toilet:
      • Positioning – When a resident enters the bathroom, the Kinect reads their body height and adjusts the height of the toilet to best suit their needs for sitting down.
      • Flushing – When residents stand up from the toilet, the toilet will flush automatically.
    • Ventilation:
      • Turn on/off - When residents enter the space, the exhaust fan will turn on automatically, and when they leave the space, it will turn off.
    • Safety:
      • Collapse Detection - The Kinect can detect when a user falls down and can alert relevant persons to assist.

Project Planning

  • Bathroom Design
    • Designers: Dan Zhang (Dan), Jing Liu (Laura), Siyan Li (Allen)
    • Design Proposal: Design a series of three bathrooms, containing all the devices to simulate a real-world space. After modelling the bathrooms, this 3D space will be imported into CryEngine 3, where it will be hooked up to the Kinect. These bathrooms will encompass all of the fixtures and devices used by the elderly, including those that assist with disabilities.
    • Design Scales: 3x2m, 3x4m, 4x6m.
    • Design Concepts:

  • Programming
    • Programmers: Steven Best, Matthew Kruik.
    • Programming Concept:
      • With the Microsoft Kinect connected to a computer, its data is read by the node system in CryEngine 3, allowing full control of the character and interaction with the virtual space.
      • Additionally, an Arduino kit will be used to allow the Kinect system to track the user, compensating for its small range and limited movement.
  • Test Proposal:
    • Outlining an area in a room that equates to our virtual space, the sensor is located in the same position as in the game engine. A series of boxes is set up throughout the space, mimicking fixtures such as toilets and sinks. A user is able to walk through the space and see onscreen how the virtual appliances and fixtures react to them.
    • To take this one step further, we will look into virtual reality devices such as the Oculus Rift, which would allow a much more immersive experience.
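Most of the intelligent-equipment behaviours in the outline reduce to the same pattern: read the tracked skeleton, derive a condition, and trigger a device. Here is a minimal sketch of two of them in Python; the numbers are illustrative guesses of my own, not measured ergonomic values.

```python
def toilet_height(user_height_m, ratio=0.25):
    """Scale the toilet seat height to the tracked user's height.
    The ratio is an illustrative placeholder, not an ergonomic standard."""
    return round(user_height_m * ratio, 2)

def has_fallen(head_height_m, threshold_m=0.5):
    """Flag a possible collapse when the head drops near floor level."""
    return head_height_m < threshold_m

print(toilet_height(1.80))  # 0.45
print(has_fallen(0.30))     # True
print(has_fallen(1.60))     # False
```

In the real system these checks would run against live Kinect joint data and drive the virtual (and eventually physical) fixtures.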



All this information was posted on our wiki to provide a common area for our members to communicate. The wiki will be used to document progress and provide reference for each project member to ensure a cohesive project is delivered.

Monday, March 11, 2013

Week 2

Group Allocations:
This week we were allocated into our groups for the upcoming projects. Oddly, the CV assignment was not used; instead, the group leaders were simply those with the highest WAM, and students with lower WAMs could choose which group to join.
While this meant that each group had a common interest in the project, it also meant that the proposed diversity of skills was less organised and more luck-based depending on who was interested in the project.
The group I joined is comprised of:

  • Matthew Kruik
  • Steven Best
  • Laura Liu
  • Dan Zhang
  • Siyan Li
Our group is doing the Liveable Bathrooms project.


Team Roles:
Once we were in our group, we decided to split into two main teams: the programming/Kinect team, consisting of Steven and myself, and a modelling and design team, consisting of Laura, Dan, and Siyan.
It was also decided that, given his previous experience, Steven would be the group leader.

Project:
We are looking at the Liveable Bathrooms project and are hoping to create a system that tracks people in bathrooms using the Kinect sensor, with CryEngine used to simulate potential interactive elements.
We will further refine this project plan in preparation for the back-brief presentation next week.