
AR/VR: The 360 Illusion of Empathy

VR/AR and 360 video are in their early commercial stages. One of the first obvious applications is to put the viewer in the shoes of another person or animal, which naturally allows the user to develop a greater empathy for the experience shown. In its infancy, however, VR/AR is a spectacle. As with the explosion of consumer content on the smartphone or the commercial release of the television set, applications and content for the most part revolve around entertainment and gimmick. For the medium to generate real sympathy in its users, to the point where a change in attitudes and behaviour takes place, the content has to be readily available on a day-to-day basis, as daily TV news, charity advertising and radio broadcasting already are.

Experiencing 360 footage taken by a homeless person or a refugee is impactful and begins a cycle whereby the user can relate to that person’s everyday surroundings. For change to take place, though, this cycle needs to come back around to actual help for the people in need. At the moment, 360 footage clearly gives an insight into those surroundings, but it isn’t necessarily generating any more impact from the empathy it provokes than sound, photography, film and live news updates did before it.

This is not a completely pessimistic outlook. There is a definite possibility that Virtual Reality will trigger sympathetic gestures and actions, or perhaps alter behaviours in the near future.

There’s no doubt that it will behave much like previous broadcasting technologies, and it is potentially more significant in provoking sympathetic reactions. Examples so far range from life in Gaza and life during the Ebola crisis to Chris Milk’s Clouds Over Sidra. All of these depict intense struggles for those involved, and the 360 content certainly gives an insight into the everyday life of its subjects. However, I couldn’t help feeling uncomfortable with my inability to do anything other than view these terrible and tragic scenarios. Then again, maybe provoking that feeling is exactly the impact it has. At what point does voyeurism in this capacity become too life-like? And after experiencing such a life-like tragedy, at what point is it wrong not to act to help those involved, or those in similar situations?

There is an element of sympathetic thought whilst engaging in these experiences, but, as with the consumers of photography and film, the viewer remains much further from sympathetic interaction than is needed to make a difference in these crises. New users are understandably hung up on the spectacle of Virtual Reality and 360 video, but to push the medium forward, documentary content needs to offer ongoing coverage and a way to interact with the people concerned. This is, for me, the main problem with television, but also the beauty of Twitter and other interactive platforms. I hope that 360 video can find a balance between voyeurism and positive interaction or charitable giving if it continues to focus on such mammoth crises of our time. I also wonder whether 360 footage will become a centrepiece of news broadcasting; a number of ethical questions arise if it does. In the heat of the moment, the camera can’t be turned away, only turned off!

AR/VR: Sentient

The work I produced focused on the homeless issue in Portland and, I guess, in a larger context, the issue along the West Coast. I’m a snap-happy photographer with a mentality that rarely stops me from taking a photo, no matter the scene. Whether taking the photo is correct or not isn’t my interest; it’s the use of those photos that’s important. Maybe it’s terrible to use photographs of others without their permission under self-defined artwork… I never intend to use a photograph maliciously or perversely, but instead as documentation.

‘Sentient’ was a chance photograph of a man on SW Ankeny Street, outside Bailey’s Taproom. Quite bizarrely, there are weathered pieces of cardboard and a leftover sign in almost the exact spot on Google Street View. I took the photo on my first day in Portland (Election Day 2016). The man said that for $5 I could take a photo. I took the photo, but the settings were wrong and it appeared too dark on the screen. I asked to take another, and he asked for another $5. At this point both he and I were getting angry at each other. He tried to grab the camera, threatening to break it, and I walked off in frustration without giving him the money. That night, I looked at it on my laptop, turned up the exposure, and to my bittersweet disappointment, it was a good photo. I felt terrible that I hadn’t paid him, and for the following few days I looked around for him just in case. Four or five days later, I was having dinner at a food stand on the corner of SW Washington & 10th Ave and, lo and behold, he was in front of me asking for a free meal. So I paid off my debt by buying him dinner, told him he looked great in the photo and that I was going to use it for an artwork. He remembered me and appreciated that I paid off the debt! I was lucky.

From then on, I was interested in creating a scattered mesh of the photograph that was only viewable if you sat on the floor in the same position as the man. It’s a simple concept: the viewer has to go down to his level to sympathise with his situation. I traced the photograph into stencil-esque block shapes, imported the resulting .svg file into Blender and extruded the faces to create a 3D representation of the man. From there I pursued the idea of the mesh shedding into a murmuration in the Oculus, but the result was underwhelming. (The best part of this experiment was playing with the powerful Unity particle system with Philippe.)
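
For anyone curious about that step, it can also be scripted through Blender’s Python API rather than done by hand in the UI. The sketch below is purely illustrative, with a placeholder file path and extrusion depth rather than the values I actually used:

```python
# Rough sketch of the SVG-to-mesh step in Blender's Python API (bpy).
# The file path and extrusion depth below are placeholders.
import bpy

# Import the traced stencil shapes; each SVG path arrives as a curve object.
bpy.ops.import_curve.svg(filepath="/path/to/sentient_stencil.svg")

# Collect the imported curves before modifying them.
curves = [obj for obj in bpy.context.scene.objects if obj.type == 'CURVE']

for obj in curves:
    # Give the flat silhouette some depth by extruding the curve.
    obj.data.extrude = 0.05
    # Convert the extruded curve to a mesh so it can be exported (e.g. to Unity).
    bpy.ops.object.select_all(action='DESELECT')
    obj.select_set(True)
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.convert(target='MESH')
```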

Luckily, in the last two days, Thomas twisted my arm into focusing on the HoloLens. Philippe and Thomas managed to create an algorithm so that the perspective of the mesh resolves perfectly when the headset aligns with the segments. This took a number of attempts, but with around two minutes to go on the final day, Thomas managed to pull the experiment through. It didn’t work out quite as expected, in that the viewer no longer needs to get down on the ground to see him, but I was amazed by what Thomas and Philippe had been able to do. It was a very well executed collaborative project, and I had no chance of achieving it without them. The final piece still forces the user to make an effort to see the figure; I’m sure that, without prompting, many users will just walk around the shards, uninterested. I would love to see the outcome in a gallery setting.
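
I don’t know the exact algorithm Philippe and Thomas wrote, but the underlying perspective trick can be sketched roughly as follows: push each flat shard of the figure to a different depth along the line of sight from an assumed viewing position, and scale it in proportion to that depth, so that from that one viewpoint the shards re-project onto the original silhouette. Everything below (the eye position, depths and example points) is illustrative only:

```python
# Illustrative sketch of the anamorphic idea: scatter flat shards of the figure
# at different depths along rays from a fixed viewpoint, scaling each shard by
# its depth so that, from that viewpoint only, they line up into the silhouette.
# This is a guess at the general principle, not the actual HoloLens code.
import random

EYE = (0.0, 1.6, 0.0)       # assumed viewing position (roughly headset height)
REFERENCE_DEPTH = 2.0       # depth at which the silhouette appears at true size

def place_shard(x, y, base_scale=1.0):
    """Push one silhouette point (x, y) to a random depth along its view ray."""
    depth = random.uniform(1.0, 4.0)
    # Scale with distance so the shard projects back onto the same point
    # of the silhouette when seen from EYE.
    factor = depth / REFERENCE_DEPTH
    position = (EYE[0] + x * factor, EYE[1] + y * factor, EYE[2] + depth)
    return position, base_scale * factor

# Two example shards, given as offsets in the silhouette plane at REFERENCE_DEPTH.
for sx, sy in [(-0.2, -1.1), (0.15, -0.8)]:
    pos, scale = place_shard(sx, sy)
    print(f"shard -> position {pos}, scale {scale:.2f}")
```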

If I were to take the piece further, the mesh would need cleaning up and texturing, and I’d love to somehow add a geotag to the location where I found him. This would allow people to wander past and see the sculpture in AR.
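
As a rough illustration of how that geotag could work, the piece might only be revealed when the viewer’s device reports a position within a few metres of the spot where I found him. The coordinates and radius below are placeholders, not the real location:

```python
# Hypothetical sketch of the geotag idea: only reveal the AR sculpture when the
# viewer is within a short distance of the spot where the photo was taken.
# The anchor coordinates and radius are placeholders, not the real location.
import math

ANCHOR_LAT, ANCHOR_LON = 45.5225, -122.6740   # placeholder lat/lon for the spot
REVEAL_RADIUS_M = 15.0                        # show the piece within ~15 metres

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres (haversine)."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def sculpture_visible(user_lat, user_lon):
    return distance_m(user_lat, user_lon, ANCHOR_LAT, ANCHOR_LON) <= REVEAL_RADIUS_M

print(sculpture_visible(45.5226, -122.6741))  # True: the user is standing nearby
```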