The National Science Foundation has awarded the Institute for Immersive Designs, Experiences, Applications, and Stories (IDEAS) at American University (AU) $1 million to establish a state-of-the-art, high-quality volumetric capture system that will support multidisciplinary research projects. The Institute has also signed a collaborative agreement with a private industry partner, TetaVi, which will provide $430,000 in funding over the next three years to support the volumetric studio's operations. Volumetric capture is a computer vision technology that records the topology of objects and people rather than, as in traditional video recording, the projection of three-dimensional objects onto a two-dimensional surface. Volumetric capture presents interesting, yet largely unexplored, research potential across many domains, such as healthcare, education, entertainment, visualization, and communication. Due to the high cost of system acquisition, volumetric capture and its applications are currently being researched at only a handful of US research centers, and AU will have the only studio in the region. Krzysztof Pietroszek is the Principal Investigator on the grant, with co-PIs Philip Johnson, Braxton Boren, Bei Xiao, and Larry Engel. The School of Communication (SOC) sat down with Institute Director and SOC Assistant Professor Pietroszek to talk about the Institute and its work.
SOC: Congratulations on the launch of the Institute for Immersive Designs, Experiences, Applications and Stories (IDEAS), established in fall 2020, and the $1,000,000 National Science Foundation funding award for a volumetric studio. What prompted you to create the Institute and what are your goals?
Krzysztof Pietroszek: Thank you. The Institute is an extension of the IDEAS lab, which I created in 2018 when I came to AU with the objective of bringing immersive media to the SOC. I noticed that at AU there are many people from a wide variety of disciplines who are interested in immersive media and technologies but don’t have the expertise to incorporate them into their projects. So, with a number of AU colleagues from SOC, the School of International Service (SIS), and the College of Arts and Sciences (CAS), I developed the proposal for the creation of an Institute that helps faculty and students create immersive media stories in a variety of fields, from science to education to journalism and filmmaking. In addition to partnering across campus with many internal AU units, we also collaborate with outside partners, such as George Washington University, California State Polytechnic, Graz University of Technology in Austria, the University of Silesia in Poland, and the University of Waterloo in Canada, my alma mater.
SOC: Tell us about some of the projects currently in progress.
KP: We have more than two dozen research and creative projects in the works right now, which is very exciting. Some have had to be put on hold because of the pandemic, but we are eager to work on them in the near future.
All of the Institute's projects are related to immersive technologies. The first example is a project we are developing with SOC professors Maggie Burnette Stogner and Larry Engel. Called Forest VR, it is a virtual reality forest in which you can perceive the world as one of the animals that live there. For example, if you embody a wolf, you see the world in a bluish tint, because wolves see less red and more blue.
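To give a sense of the kind of rendering involved, here is a minimal, illustrative sketch of a per-pixel color filter approximating that bluish tint. The function name and scaling factors are invented for illustration; this is not the Forest VR implementation, and a faithful simulation of dichromatic animal vision would remap colors in a perceptual color space rather than simply rescaling channels.

```python
# Illustrative sketch (not the Forest VR code): approximate a "wolf vision"
# tint by attenuating the red channel and boosting blue. The scale factors
# below are assumptions chosen for demonstration.

def wolf_tint(rgb, red_scale=0.4, blue_scale=1.3):
    """Apply a crude bluish tint to an (r, g, b) pixel with 0-255 channels."""
    r, g, b = rgb
    return (
        int(min(255, r * red_scale)),   # wolves see less red
        g,
        int(min(255, b * blue_scale)),  # ...and relatively more blue
    )

# Example: a warm gray shifts toward blue.
print(wolf_tint((180, 160, 140)))  # -> (72, 160, 182)
```

In a real engine this transform would run as a post-processing shader over the whole frame rather than per pixel in Python.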
Another favorite is Vera, an immersive narrative based on Katherine Mansfield's short story A Dill Pickle. The unique thing about this project is that it’s the world’s first tabletop holographic film -- which basically means you see the film as a little dollhouse hologram in front of you on a table. You can watch "Vera" from different perspectives -- you can move closer, you can step back out of it. As an analogy, in Star Wars they play holographic chess, and there is a Princess Leia hologram. That's how it looks -- a hologram of a film happening in front of you. It’s a very exciting project and a bit difficult to explain without showing; it comes in multiple forms and is best seen for yourself. I had the pleasure of presenting this and other immersive experiences at the Cannes Film Festival as an exhibit and an invited keynote.
Also fascinating is a project by Dr. Bei Xiao in the CAS Computer Science Department entitled Mass Perception of Virtual Objects. Dr. Xiao is working on ways to simulate the perception of different objects in VR in such a way that they feel real. In VR you can display anything you want. Let's say you display a stone and a balloon. How do you make people feel that the stone is heavy and the balloon is light? Dr. Xiao is developing perceptual cues, such as sound and touch, to achieve this. Imagine interacting with virtual objects by grabbing them: if you grab a stone, you will feel that it's heavy.
The NuclearBiscuit project uses a VR experience to better understand decision-making during a nuclear crisis. It is a collaborative effort between SIS faculty member Dr. Sharon K. Weiner and Moritz Kütt of the Institute for Peace Research and Security Policy at the University of Hamburg. The project's name is a play on the nickname “biscuit,” which refers to the presidential identity code required to authorize any nuclear attack. Dr. Weiner recreated various rooms in the White House, such as the Oval Office, and created an accompanying narrative. In it, you, as the President of the United States, are forced to decide, over a very short period of time, the best response to a nuclear weapons attack. Through VR, you head to the bunker, where your generals present data and await your decision on what to do. Of course, they have their own opinions and exert psychological pressure to get their way. As it turns out, the information was mistaken, and there was no attack. Using this interactive game, Dr. Weiner is able to illustrate how easily people with limited information can be manipulated into making a disastrous decision. It's essentially a psychological thriller in which you have to think quickly and rationally in an extremely high-stress situation.
Also very interesting is a project done in collaboration with Dr. Malgorzata Luszczak from the University of Silesia, using a brainwave scanner. The scanner has prongs, like non-invasive fingers, that are placed on a person’s head. The scanner estimates the person's emotions based on the electrical signals generated by the brain. Dr. Luszczak and I then create visualizations representing those emotions, similar to what abstract expressionists do. We call it Brain Art and expect the project to be completed next spring. It's basically an artist-directed reflection of your unconscious presented in an immersive medium.
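One common way emotion estimates are represented in affective computing is as valence (unpleasant to pleasant) and arousal (calm to excited) scores. The sketch below shows one hypothetical way such scores might drive an abstract visualization; the inputs, ranges, and color mapping are all illustrative assumptions, not the Brain Art pipeline.

```python
import colorsys

# Hypothetical sketch: map an estimated emotion to a color that an abstract
# visualization could use. Valence in [-1, 1] picks the hue (negative ->
# blue, positive -> red); arousal in [0, 1] picks the saturation. These
# conventions are invented for illustration.

def emotion_to_rgb(valence, arousal):
    """Return an (r, g, b) color with 0-255 channels for an emotion estimate."""
    hue = 0.33 * (1.0 - valence)                 # +1 -> red (0.0), -1 -> blue (0.66)
    sat = max(0.0, min(1.0, arousal))            # clamp arousal into [0, 1]
    r, g, b = colorsys.hsv_to_rgb(hue, sat, 1.0)
    return tuple(round(c * 255) for c in (r, g, b))

print(emotion_to_rgb(1.0, 1.0))   # fully pleasant and excited -> (255, 0, 0)
print(emotion_to_rgb(0.0, 0.0))   # neutral and calm -> (255, 255, 255)
```

A real system would smooth the EEG-derived scores over time and drive many visual parameters (shape, motion, texture), not just a single color.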
Another fun project we are working on is an immersive artificial intelligence actor called Adam, who learns how to perform Shakespeare’s Hamlet. A user acts as the director and can control Adam using voice commands. For instance, I’ll say “Well, your interpretation felt really inauthentic, you really should think about it this way.” And Adam adapts to that direction. But of course it will take years to teach Adam enough acting skills to be a truly good actor.
The last exceptional example is our 3D-printed robot named Damian, an extended reality project. The immersive media element with Damian is that we are using Virtual Reality (VR) to embody the robot. We also have plans for Damian to appear in a film as an actor. Developing the ability to control the robot from a virtual reality environment would allow a human actor to embody the robot on a film set, instead of programming every single move of the robot. The robot’s name is Damian because he was partially built (3D printed) by a very unusual Polish man living in a remote village -- a man right out of a Tarkovsky movie.
Normally, it would have taken us about six months of non-stop printing to create a robot as large as Damian, which is composed of 200 parts, is six feet tall, and weighs about 90 pounds. SOC students actually attempted it but had finished just one arm before the pandemic hit. Since we couldn't print the rest of the robot at the university, I asked around and found this man on one of the many forums I participate in. He said he could sell us a nearly complete robot, but on the condition that we keep the robot's name, Damian. He explained that ‘he is like my son.’ He really felt he was giving us part of his child for adoption. He sold us the body, but not the robot's software, because, for him, the software was really the soul of the robot. We are now giving Damian a new soul, which we are programming from scratch.
We are also working on Damian's gestures. The idea is that the robot's body language should fit what he says in response to what you say to him. The robot “understands” almost any language supported by Microsoft voice recognition. But of course, the understanding is limited: it doesn't grasp the semantics of the words it hears. Its "understanding" is based entirely on keywords. So if you say “Who are you?” it will "understand" and answer, "I am a robot. My name is Damian." But if you ask something more complex than that, it becomes like a conversation with a chatbot that makes no sense. Very soon you’ll realize the limitations of his understanding of spoken language.
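Keyword-based "understanding" of this kind can be illustrated with a toy sketch. Damian's actual software is surely more elaborate; the keyword table and responses below are invented for illustration, but the failure mode is exactly the one described above.

```python
# Toy sketch of keyword-based "understanding" (illustrative, not Damian's
# actual software): a reply fires only when all of its keywords appear in
# the utterance, so anything outside the table falls through.

RESPONSES = [
    ({"who", "you"}, "I am a robot. My name is Damian."),
    ({"hello"}, "Hello!"),
]

def reply(utterance):
    """Return a canned answer if every keyword of some rule is present."""
    words = set(utterance.lower().strip("?!. ").split())
    for keywords, answer in RESPONSES:
        if keywords <= words:            # all keywords present?
            return answer
    return "I do not understand."        # no semantics, just keyword lookup

print(reply("Who are you?"))                              # -> I am a robot. My name is Damian.
print(reply("What is your opinion of Hamlet's grief?"))   # -> I do not understand.
```

Note that the second question contains "you"-adjacent words and plenty of meaning, yet matches nothing: without semantics, any phrasing outside the keyword table fails.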
SOC: Is there an age whose level of intelligence you could compare the robot's to -- say, a five-year-old?
KP: This analogy is often used, but I disagree with the comparison. I think it's offensive to a five-year-old. Sometimes it’s said that current general artificial intelligence (AI) is on the level of a cat. I think this is offensive to a cat, too. My brother has a cat, and I think animals are very smart and have a certain level of consciousness that robots don’t have and won’t ever have. For example, when teaching a robot to converse with you, all subtext is going to be lost, while a cat will understand your emotions better than many humans. People use ambiguous speech all the time without realizing it. As long as you are working within a narrow, limited field with precise language and terminology, you can provide the robot with enough information to hold a conversation, but the moment you introduce any ambiguity, the robot is unable to catch it.
SOC: These are all fascinating opportunities to learn about these technologies. How would you define the goal of immersive technologies?
KP: At the Institute, our goal is to develop, apply, and understand immersive media in education, journalism, filmmaking, and health. Because these technologies are so powerful and persuasive, their impact could be immensely positive. But, as with other platforms, the potential for negative impact also exists, so we have to be careful. For example, while we want to develop the best possible immersive media, the more realistic they are, the harder it will be to differentiate between reality and immersive media. This is probably the first medium that can achieve a near-perfect “Matrix” level of realism, where you don't know that what you are seeing is not real. So, additionally, one of the goals of the Institute will be to research the ethics of VR and to find applications that would have a positive impact. Ultimately, we need to realize that the development of any powerful new medium can have both positive and negative consequences.
SOC: Is there an organization focused on the ethics of immersive media?
KP: As a field, we address this important issue through research, at conferences, and in academic papers. But I am not aware of an organization devoted specifically to the ethics and policies of immersive technologies. It is a gap that needs to be filled.
SOC: How does ethics factor into what you are doing?
KP: I would refuse to engage in a project that could be potentially unethical. The problem, of course, is that there may be an unethical implication I haven’t thought of. In general, there is a lot of potential for ethical issues with immersive technologies. The problem of misuse and the persuasive power of the immersive medium itself are two examples.
But, surprisingly, more often than not we find that immersive media are enabling rather than limiting. For example, there is the general problem of the digital divide -- the idea that some people have access to technologies and some don’t. Many people thought that immersive technologies would be the worst example of the digital divide -- that immersive content would only reach people who have a virtual or augmented reality headset, which is expensive. But, surprisingly, the problem of access is mitigated by the existence of smartphone-based immersive technology. Many current smartphones, and soon nearly all smartphones, are able to display augmented and virtual reality content. On top of that, the proliferation of smartphones is absolutely stunning. In many developing countries, people may not own a computer, there may be no telephone landline, there may even be problems with access to clean water -- and yet you can be fairly sure that there will be a robust mobile network infrastructure and that many people will have access to a mobile phone line. Even if the data connection on that phone is very slow and you can't watch YouTube, you will be able to use immersive applications.
As an example, we are working on a project called UniVResity, in which we use only the voice connection of a mobile phone to create a virtual world of education for people in refugee camps. While many people in a refugee camp have a smartphone, they often lack a fast internet connection. The United Nations is not doing enough to provide fast internet access, and data access discrimination is a daily experience in the camps. So we developed a technology where you download an app somewhere you do have an internet connection, and then, over just a voice connection, you receive everything you need for the immersive experience of being in a real classroom. Currently a prototype, in the future this technology could allow you to attend classes at Harvard, American University, or any other university remotely. These lectures could be viewed and interacted with in real time by a student in a remote area with just a smartphone-based VR headset, using only the voice connection. YouTube can’t be done over voice, but the immersive classroom can. This is an example of how immersive technology can actually bridge the digital divide rather than widen it. It’s an exciting idea that shows what’s possible for the future.
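For readers curious how data can travel over a plain voice channel at all, one classic technique is frequency-shift keying (FSK), used by early telephone modems: each bit is encoded as a short audio tone at one of two frequencies. The sketch below is illustrative only -- it is not the UniVResity protocol -- and uses the historical Bell 103 originate-mode tone frequencies.

```python
import math

# Illustrative FSK modulator (not the UniVResity protocol): encode bits as
# audio tones that survive a telephone-quality voice channel.

SAMPLE_RATE = 8000               # telephone-quality audio, samples per second
BAUD = 300                       # bits per second, as in early modems
FREQ = {0: 1070.0, 1: 1270.0}    # Bell 103 originate-mode space/mark tones (Hz)

def fsk_modulate(bits):
    """Turn a bit sequence into a list of audio samples in [-1, 1]."""
    samples = []
    samples_per_bit = SAMPLE_RATE // BAUD
    phase = 0.0
    for bit in bits:
        step = 2 * math.pi * FREQ[bit] / SAMPLE_RATE  # phase advance per sample
        for _ in range(samples_per_bit):
            samples.append(math.sin(phase))
            phase += step
    return samples

signal = fsk_modulate([1, 0, 1, 1])
print(len(signal))  # 4 bits * (8000 // 300) samples per bit
```

At 300 bits per second a voice call is far too slow for video, which is why the classroom content itself (room geometry, slides, avatars) would be pre-downloaded, with the voice channel carrying only the live audio and small interaction updates.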
SOC: How did you get involved in this field?
KP: I have been interested in VR for as long as I can remember, probably due to my fascination with so-called hard sci-fi. As a doctoral student in Canada in 2004, I wanted to focus my dissertation on Virtual Reality. But my advisor disagreed and assigned me a different topic that just wasn’t interesting to me. So I switched gears, first getting a Master’s in Communication Studies and then going to film school for my MFA. After the MFA, I was drawn back to VR. By then, there were more opportunities and interest in the field of immersive technology because the technology itself had become more accessible. I was able to pursue a PhD in 3D interaction, an important subfield of immersive media. I am thrilled to be able to combine my expertise in film, communication, and computer science with my interests in this exciting field.