Thursday research bulletin 16.7.2015

So this week’s been pretty much packed with different kinds of research articles. In particular there’s the previously mentioned Curtin Business School Colloquium article in the area of HCI, virtual embodiment and human-technology coupling, which I pretty much re-wrote although I had some of it ready from before. I guess part of research “rigour” (the word we all love) is that you will re-do everything if you feel something is not right – even if you don’t feel like it. Anyway, it looks much better now and perhaps I can extend it into a “real” journal article somewhere along the way.

From the Web

In the VR section, I guess one of the interesting things this week was HTC’s announcement that something actually tangible might finally come out in October:

I’ve also followed Leap Motion’s virtual hands concept with interest:

Naturally I would like to explore it with The Talos Principle, which I’ve recently played:

I guess in the section of embodiment and “Oh well, it’s only a year or two before Tony Stark’s gadgets start to look old”, this post sort of made my day:


A couple of notes about the philosophy of human-computer interaction

Recently I came across a four-video series where Dag Svanaes discusses how the philosophy of Heidegger and Merleau-Ponty could inform a better understanding of interaction design. The first of the videos is below, along with a couple of quotes that I found especially fundamental and useful.

“To actually experience interactivity, you have to engage in some kind of interaction. And it is only through this interaction that the interactivity of the object appears to you. That you perceive it. Objects have various affordances, interaction is created by you in the interaction with the gadget.”

From video 2: “What it is (for example a pen), is just matter in space. It becomes something through use and social negotiation.”

I find this a very important notion, especially as I’ve been reading more and more studies that take existing game engines and use them “seriously” in professional training, such as in mining. Video games often have certain affordances, such as exploration and interaction in general (naturally there are differences between Tetris and Halo). Such affordances are always restricted by the underlying programming and the developers’ choices (invisible walls; I can’t go and eat a burger in Halo, and in Borderlands I cannot die hitting the ground even after jumping from the tallest building). Still, it seems to me, something more is always stripped away when games are “assimilated” into education. In the worst cases they become something other than games. Through actual use, they become mere PowerPoints.

Embodied HCI: Testing the Myo Gesture Control Armband

I have quite a neat field of study, I have to say. At the same time I can contemplate matters of the digital, but also try out the latest tech that’s not even officially out there for consumers. Like this cool little thing called Myo, that hopes to be part of “building the future of human-computer interaction”.

Myo is a device that you put around your forearm and use to control things on a computer. The device recognises arm position and different gestures you perform with your hand (swipes, taps with fingers etc.).

Some current applications I’ve seen advertised use it for presentations (flipping slides and such – yeah I know, rather boring perhaps), some games, and controlling tangible real-life objects such as robots.

Installing the device on a Mac was quite fast, and syncing gestures worked quite well. In the initial tryout we also downloaded a Myo computer-cursor integration. My first impression is “cool, but needs some further thinking perhaps”. The cursor moves quite well while moving the arm, but if you use a certain gesture as, for example, the right click of a mouse, performing it also moves the cursor, which makes you miss your target. Perhaps assigning mouse clicks to a different gesture would fix this, but as said, I only tried it quickly today.
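One way around the click-drift problem would be to freeze the cursor for as long as the click gesture is held, so the arm motion that produces the gesture cannot drag the pointer off its target. Here is a minimal sketch of that idea in plain Python – the event names and the CursorController class are my own invention for illustration, not the actual Myo SDK:

```python
# Sketch: decoupling cursor movement from a click gesture.
# Event names ("move", "fist_start", "fist_end") are hypothetical,
# standing in for whatever the real gesture SDK emits.

class CursorController:
    def __init__(self):
        self.x, self.y = 0, 0
        self.clicking = False
        self.clicks = 0

    def on_event(self, kind, dx=0, dy=0):
        if kind == "fist_start":
            # Freeze the cursor for the duration of the click gesture,
            # so the gesture's own arm motion can't move the pointer.
            self.clicking = True
        elif kind == "fist_end":
            self.clicking = False
            self.clicks += 1
        elif kind == "move" and not self.clicking:
            self.x += dx
            self.y += dy

# Simulated event stream: move to a target, click (with stray arm
# motion during the fist), then move on.
ctrl = CursorController()
events = [("move", 10, 5), ("fist_start",), ("move", 3, -2),
          ("fist_end",), ("move", 1, 0)]
for ev in events:
    ctrl.on_event(ev[0], *ev[1:])

print(ctrl.x, ctrl.y, ctrl.clicks)  # the stray (3, -2) during the fist is ignored
```

The design choice here is simply to treat gesture state as a gate on pointer input; a real integration would also want a short grace period before the freeze, since the stray motion actually begins slightly before the gesture is recognised.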

Connections to my study in HCI

In my own study I’m currently looking into embodied accounts of human-computer interaction (HCI). HCI has been troubled by a view of a disembodied mind (i.e. the user): the user has been described more like a disembodied information-crunching machine than an actual human being who experiences the surroundings more broadly through different parts of the body and uses them to interact with various objects.

Some studies I’ve recently read, like the one in game studies by Farrow and Iacovides (2012), say that we should try to invent more coherent ways to understand embodiment in digital or virtual environments, and that this could lead to a better understanding of HCI. Still, in my opinion, the mouse, keyboard and game controllers can similarly be considered an embodied experience, as the controls become transparent and allow us (if things go right) to interact with objects on the screen. Take the simple example of a computer cursor: when I want to move it, I do not think about the device (the mouse) that mediates the movement. I am moving the cursor.

There currently seems to be a bit of confusion about the new full-body-gesture-whatever controllers and whether they are somehow more immersive. Farrow and Iacovides (2012) actually criticized this as a wrong assertion: experienced gamers feel very strong engagement with the activity on the screen with just a regular game controller. Simply put: as a controller used with the hands, it is still embodied interaction. Using the body, gestures or voice recognition to control games actually evokes mixed feelings, for example with Skyrim and Mass Effect. Interestingly, voice commands might even distance oneself from the game. Well, think about it: isn’t it a bit uncanny to command yourself by saying “Marko, do this and this”?

Recently Sarah Zhang in Gizmodo Australia criticized VR for not letting us feel the kinds of movements we see on the screen. There are some good points in her thinking, but I think some of her ideas are slightly misleading. If movements themselves were the issue, I couldn’t play video games on a monitor or a TV screen either. Still, I do not get nausea or “cybersickness” with screens. I am also very much immersed, engaged or involved (whichever of these definitions fits for you) in the game. So the question is whether the problem actually lies with the stereoscopic display technology, rather than with the activity and its correspondence to our body.

Anyways, interesting times to examine what (embodied) human-computer interaction means nowadays, and how we should talk about it.