In the Knight Lab: Studio class, every Virtual and Augmented Reality project is a blank slate.
All of our developers are learning how to make VR and AR for the first time; many of them don’t even consider themselves developers yet. With the exception of a few projects which have run in successive quarters, we’re never building on something that already exists. We go from “New Project” to our final destination in ten weeks. The fact that we don’t carry much institutional knowledge from project to project is certainly challenging, but it’s also offered us the flexibility to evaluate which tool is the right one for each project as it comes up. I’ve identified four criteria that I think about every time I start planning a Studio project.
Accessibility of Production
- "What kind of hardware is required to be productive?"
- "What are the licensing terms?"
- "Does the tool allow diverse contributors feel ownership over the work?"
Portability of Skill Set
- "Does this tool develop skills that will continue to be useful as technologies change?"
- "Is this a tool that students are likely to use again in a job or further studies?"
- "Does mastering this tool enable the student to produce a wide variety of media projects?"
Accessibility of Distribution
- "What kind of hardware will be required to experience what we make?"
- "Will we have to navigate an app store or other approval process?"
- "How difficult will it be to test our experience?"
- "Will we be locked into a specific platform or operating system?"
Speed to Productivity
- "Will we be able to investigate our questions in the time that we have together?"
- "What is the tool's documentation like?"
- "Is there an active community surrounding the tool?"
- "What kind of support exists for working collaboratively?"
Accessibility of Production
Our students work on their own machines. This was especially true during spring quarter 2020 when students were working from home due to social distancing, but even when students have access to on-campus labs, a tool they can install on their own computer is highly preferable. I have learned the hard way that work that has to be done in a lab is work that gets procrastinated. A fluorescent-lit lab is not a great place to do difficult creative work. Students need to be comfortable. They need to be able to set up their own keyboard shortcuts and SSH keys. The laptop that I took to college was a hand-me-down from my uncle, and by the time I was a junior the cord was fraying so badly that the computer would only charge if you looped the cord around the screen and stepped on it with your foot to hold it taut. I want tools that will let a student create AR and VR experiences on that laptop.
In addition to the licensing terms and system requirements, it’s important to me that the tools we use provide opportunities for students from diverse academic backgrounds to contribute equally. Many students from non-technical disciplines apply to the Knight Lab class because they are curious about becoming more technical. The goal of the Studio class isn’t to teach students to code, but we do want to give them space and tools to explore that curiosity. When content and asset management live in the same environment as the codebase, everyone gets to get their hands dirty. When I’ve used tools that only students from technical backgrounds can really operate, or that require a machine with specific specs, the one student who can run the project often ends up shouldering a lot of stress toward the end of the quarter. Even in small teams that are only together for ten weeks, I try to avoid siloing members.
Portability of Skill Set
Accessibility of Distribution
This should be a priority for all XR creators, but it’s especially important for us. When designing VR or AR projects for Knight Lab, I always ask myself, “Why are we doing this inside a journalism school? Why isn’t this a project for a game studio or film production company?” The answer is usually people. More than figuring out how it can entertain, or even evoke emotion, our job is to determine how VR and AR technology can get people information that they can use. We don’t want our work to be confined to households with gaming rigs. We want it to be accessible to the widest range of people, in the widest range of environments and on the largest number of devices possible.
Tools which allow developers to deploy their projects directly to the Web provide a major advantage when it comes to ease of distribution. It’s a lot easier to give someone a link to click while they’re reading an article to trigger an AR experience than to ask them to download a whole separate app. With the Web, you don’t have to worry about what kind of device your user has; you build an experience once, and you build it for everybody.
Speed to Productivity
As I mentioned above, we convene teams for extremely short periods of time. Very few students come on to our projects with existing AR/VR production knowledge and most students don’t take the Studio class more than once. This means that we have ten weeks to get from zero to wherever we need to go. I need to choose a tool that operates at the right level of abstraction, but also ideally one that is well documented and has a robust community that students can learn from on their own. Ease of collaboration is also a big priority and one that isn’t particularly easy to come by.
Some of these values conflict with each other in ways that are somewhat predictable. Tools that offer the opportunity to become productive quickly often abstract away so much of what’s going on “under the hood” that trying to accomplish similar things in a different environment can feel like starting over completely. Many of the tools with the fewest system requirements can feel the most unapproachable for people coming from non-technical backgrounds. If we’re trying to build a really flashy proof of concept that pushes the limits of what we think AR or VR can do, I might prioritize speed to productivity over accessibility of distribution. But if we’re looking to do a lot of user research and see how people actually use what we build in the wild, I might make the opposite choice.
Unity
Unity is a cross-platform game engine that can build to the web, Android, iOS and all major VR and AR headsets. This means it scores very well on the accessibility of distribution and portability of skill set metrics. Unity uses an entity-component-system architecture that students will also encounter in tools like A-Frame and Unreal.
Unity has an asset store which enables students to easily import packages made by other people, many of which are free. It also has a large community of users and a lot of documentation and tutorials—some Unity-sponsored and some created by the community. Additionally, there are courses which use Unity in other departments in the university. Because of this, I expected that Unity would be the best tool for making usable experiences quickly. However, that hasn’t been the case.
The robust Unity ecosystem is great if you have experience teaching yourself new things and know how to identify what you’re looking for; however, it can be pretty overwhelming at first. Unity has been around for about fifteen years and the functionality has changed substantially between versions. It’s not uncommon for students to spend a significant amount of time trying to implement functionality without realizing it doesn’t exist in the version they’re using.
Version control solutions for Unity are getting more user-friendly, but they’re still pretty intimidating for beginners, which can make collaboration difficult. We’ve found Unity to be most useful for projects where we’re working pretty close to the current state of the art, rather than creating proofs of concept that might only become practical in the far future. Because you can build a Unity project for both Android and iOS, it’s a very good option for projects where we want users to be able to test the experience on their own phones, in cases where negotiating the performance constraints of deploying a robust project on the web might take too much time.
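One thing that has made Git more tolerable for our Unity teams is ignoring the engine’s generated files so that only source assets get committed. As a rough sketch (these folder names are Unity’s defaults; a real project may need more entries):

```gitignore
# Unity-generated caches and local state -- never commit these
Library/
Temp/
Obj/
Logs/
UserSettings/

# Build output
Build/
Builds/

# IDE project files that Unity regenerates on demand
*.csproj
*.sln
```

Pairing this with Edit > Project Settings > Editor > Asset Serialization: Force Text keeps scenes and prefabs in a diffable text format, though merging scene files by hand is still painful for beginners.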
Studio projects built with Unity: Location Based Storytelling With Augmented Reality
A-Frame
A-Frame is an open-source web framework for building VR experiences with HTML, built on top of three.js. How quickly A-Frame can get you to your destination is highly dependent on what you’re trying to build. Need a 360 photo viewer? Great! That’ll take you about 11 lines of code. More robust projects do become more difficult, and there are fewer tutorials and libraries to lean on. Productivity gains from working with A-Frame generally come from existing knowledge of, or a desire to learn, web development.
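As a sketch of what that 360 viewer looks like, here is a complete page; `photo.jpg` is a placeholder for your own equirectangular image, and the CDN version number is just one published release:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Load A-Frame from its CDN -->
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- <a-sky> wraps an equirectangular image around the camera -->
      <a-sky src="photo.jpg"></a-sky>
    </a-scene>
  </body>
</html>
```

Serve the file over HTTP and open it in a browser; A-Frame handles the rendering, camera controls, and the "enter VR" button for headsets.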
Sumerian
Sumerian has a GUI and uses the same entity-component-system architecture as Unity, but it’s entirely based in the browser. Since it’s part of AWS, hosting and deployment are a breeze, and you can edit your Sumerian experience from any computer with internet access. There’s not a lot of buzz about Sumerian, so I don’t anticipate that students will run into it later in their careers, but it’s been a surprisingly effective tool for us. That said, one of the disadvantages of using a young, little-known tool is that Amazon might decide not to continue supporting it. Sumerian is free and designed to let you create AR/VR projects without writing code, but you still need to navigate the AWS permissions system, which can be a little clunky if you haven’t had to use it before. The most frustrating thing about Sumerian is that most of its educational resources are extremely long videos, which aren’t searchable; it’s much harder than it should be to just look up how to do things. There’s also no easy way for multiple people to collaborate on a project at the same time. On the other hand, the fact that Sumerian projects are developed in the same environment where they’ll be viewed is very useful for helping students constrain their designs to things that will actually run.
Studio projects built with Sumerian: Contrasting Forms of Interactive 3D Storytelling
Torch
Earlier this summer, Torch announced that it will be discontinuing service as of September 1st. I chose to include Torch in this roundup because we’ve used it a lot and I think it’s instructive to explore why it’s been so useful for us.
Torch is an iOS app for building mobile AR, which is easily the AR/VR platform most accessible to consumers. Torch projects can be published to the web or integrated into iOS apps via the Torch SDK. Torch only runs on a single platform, but we’ve found that our students are much more likely to have new iPhones than powerful laptops, so I gave it high marks on accessibility of production. When we were remote in spring 2020 due to the Covid-19 pandemic, Torch was by far the most practical tool for continuing AR development. Torch is narrowly focused on one kind of content—mobile AR—but it’s equally good for quickly prototyping a mobile AR project or developing one straight through deployment. That narrow focus, combined with robust educational materials, has made it very easy for students to grow their skills. Torch is a fairly unique tool, and it’s unlikely that students will encounter anything quite like it again, but it has been excellent for enabling us to quickly produce AR content that reflects what most users can actually access today.
Studio projects built with Torch: Information Spaces in AR/VR, Augmented Reality Features with 3D Food Photography
Spark AR
Spark AR is Facebook’s tool for building AR effects that run inside Instagram and Facebook. Spark AR effects are most likely to be used by people already on those platforms, so you’re not asking your audience to download another app. However, a lot of work still needs to be done when it comes to thinking about how to distribute Spark AR apps in a way that makes sense for journalism. When you publish an Instagram effect, for instance, it will appear in the effects menu on Stories for everyone who follows your account. If one of your followers shares a Story with your effect, their followers will see a “Try it” button where they can choose the effect. Right now, the account that created the effect doesn’t have access to track these shares or view them in aggregate, which limits many potential journalistic applications. It’s also important to consider the limitations of creating content that’s dependent on Facebook’s platform as opposed to the Web or a proprietary app.
Studio projects built with Spark AR: Augmented Reality Features with 3D Food Photography
There is no one perfect tool for introducing students to AR or VR production. Each of these tools is uniquely suited to specific problems and scenarios, and that’s perfectly fine. The important thing isn’t that students develop technical mastery of a specific interface. The tools will change. What creates lasting value is the exposure to new kinds of problems. When I start a new project, I think first about what we’re trying to learn from it. Do we want to do usability studies to figure out how people actually interact with immersive media? Then I’m going to be thinking about tools that will help us get productive quickly and get our work onto devices fast. Are we making wild prototypes in an effort to demonstrate what we think immersive media will be capable of in five years? Then I might choose to prioritize accessibility a little less if it limits what we can build. The unique environment of the Studio class gives us the opportunity to choose the tool that best serves each of these specific problems.