The following are more (long) excerpts from a Slack chat with members of The Little Rebellion, part of Nancy Heiz's advanced editing journalism class at SUNY New Paltz. Here's part I.
In this follow-up post, I'll explore immersive storytelling and emerging technologies, specifically virtual reality and 360-degree content as they relate to journalism and small newsrooms.
Let's get right to it!
Will virtual reality have a transformative role in journalism?
Virtual reality is not going to fundamentally affect journalism in a revolutionary way, for a number of reasons, which include its limitations (too expensive, blocks your view of reality) and its low penetration (not many people have it, and nobody has it at all times).
It is, currently, another tool that helps tell stories in a new way.
However! Some of virtual reality's components, along with other technologies, are starting to form a new paradigm that I do believe will change the way we consume information in a fundamental way, and that could shake and disrupt journalism's forms and distribution, just as the Internet and social media have done before.
Because with these new technologies, everything's a platform.
On their own, a number of these emerging processes have failed to live up to their potential or over-hyped promises (hello QR codes and Google Glass). But if you take these technologies' best traits and seamlessly incorporate them into existing everyday affairs, then you can see how the way we get our news will change, and we won't even notice.
For example, QR codes 'failed' because they were cumbersome to use and the goals set for them were too ambitious. But they didn't go away; they were simply absorbed by other technologies to perform modest and realistic functions, from Facebook Messenger and Snapchat codes to authenticator and recognition apps, which include Google Cardboard viewers.
Similarly, there are many good things that came out of Google Glass, and you can see its influence in Spectacles or the emerging augmented reality field.
#neverforget
Right now, basically, we are in the baby stages of this emerging tech. So you need a phone to catch a Charizard (I know, 'nobody' is doing that anymore, but whatevs). The next step is probably glasses that add 3D models to your view; the step after that is probably holograms and screenless displays.
What would your favorite news provider look like in a mirror when you're brushing your teeth? How do you design the news for a fridge? (Already, there are news apps for smartwatches, Alexa, etc.)
Take artificial intelligence, for example. AI is already here, although not quite in the way we think of it (pick your favorite sci-fi movie or Boston Dynamics gif).
From algorithms to automatic captions that learn their users' accents, there are a number of processes that already use artificial intelligence to aid in the production of stories.
A lot more of this is aided by not-so-intelligent processes (automation): code that scrapes primary documents for content, for example, or Twitter bots, or AP articles written by robots for basic business stories or sports results (and also video).
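To make that concrete, here's a tiny sketch (in Python, with made-up teams, scores and a hypothetical `game_recap` function) of the template-filling idea behind those robot-written sports briefs. Real systems, like the AP's, are far more sophisticated, but the core trick is the same: structured data in, readable sentence out.

```python
# A minimal sketch of template-based story automation: structured data in,
# a one-sentence sports brief out. All names and numbers here are hypothetical.

def game_recap(game: dict) -> str:
    """Turn a structured box score into a one-sentence recap."""
    margin = abs(game["home_score"] - game["away_score"])
    if game["home_score"] > game["away_score"]:
        winner, loser = game["home_team"], game["away_team"]
        win_score, lose_score = game["home_score"], game["away_score"]
    else:
        winner, loser = game["away_team"], game["home_team"]
        win_score, lose_score = game["away_score"], game["home_score"]

    # Pick a verb based on how close the game was.
    verb = "edged" if margin <= 3 else "beat" if margin <= 10 else "routed"
    return (
        f"{winner} {verb} {loser} {win_score}-{lose_score} "
        f"on {game['date']} at {game['venue']}."
    )

if __name__ == "__main__":
    sample = {
        "home_team": "New Paltz", "away_team": "Oneonta",
        "home_score": 78, "away_score": 75,
        "date": "Saturday", "venue": "the campus gym",
    }
    print(game_recap(sample))
    # -> "New Paltz edged Oneonta 78-75 on Saturday at the campus gym."
```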
All of these are supposed to make the job easier, as they should and do, but they will also displace those who do these jobs to some degree, depending on how the technology is applied in the industry. I know the academic version of this is that AI and automation will make everything simpler, but they will also displace the people who currently do this work (and have not developed new skills).
I can certainly see corporations overreaching and trying to replace actual humans with 'machine learning' and 'funnels' of content.
But generally speaking, the change is positive, in that a single journalist is able to do a lot more thanks to technology. The analogy I'd use is doing old-school research for an academic paper in a physical library to find a quote vs. finding the same quote quickly with a search on Google Scholar.
Our small newsroom's experimentation with these emerging technologies is part of a realization that platform adaptation is a constant.
We go where our audience is, yes, but the idea also is to have a good handle on these platforms and technologies by the time they become commonplace.
So when is this happening?
In a sense, it's all here already. Your Android or iPhone (6 or higher, sorry) is a VR device. All you need is a viewer, and they are cheap.
There are tons of apps, and everything that's on YouTube can be seen in VR in the YouTube app. It's entry level because the quality is limited by the phone's capacity and connection bandwidth, and you can't move around in the space or grab things. But the 'wow' factor is there, and it's a good introduction to VR.
If you get the Google Street View app, you can use the same phone to take a 360-degree picture that you can view in VR with the very same phone and a Google Cardboard viewer.
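If you're wondering what actually makes those pictures show up as spheres instead of flat panoramas, it's a bit of XMP 'photo sphere' metadata (the GPano tags) tucked into the JPEG. Here's a rough Python sketch that checks for it; the file name is a placeholder, and it just scans the raw bytes rather than using a proper XMP parser, so treat it as a quick sanity check.

```python
# A rough check for the "photo sphere" XMP metadata that tells Facebook,
# Street View and VR viewers to render a JPEG as a 360-degree sphere.
# This scans raw bytes instead of parsing XMP properly, so it's a quick
# sanity check, not a validator. The file path is a placeholder.

def looks_like_360_photo(path: str) -> bool:
    with open(path, "rb") as f:
        header = f.read(256 * 1024)  # the XMP block sits near the start
    return (
        b"GPano:ProjectionType" in header
        and b"equirectangular" in header
    )

if __name__ == "__main__":
    print(looks_like_360_photo("theta_shot.jpg"))
```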
If you want to check it out yourself, I'd start with the Cardboard app, and then some VR roller coasters (just do a search for that in the app store). For story examples, check out NYTVR, RYOT (via Apple or Android) and Within. There are many others, and, yes, some stories overlap among the apps.
If you want to make some 360-degree and virtual reality content in a smaller newsroom, it's reasonable to create it yourself, but it's not super simple to do. That's just a reality.
This is something we had in mind when we purchased a Ricoh Theta S over a year ago, with the idea that it would be as easy as pressing a button (which it is), with the upload happening in the background or back at the office, while everything else that's part of the reporting still takes place as usual.
The Ricoh Theta S is not good for videos, though, as the resolution is too low (1080p is not good when you have to wrap the pixels in a sphere), so we're playing with an Insta360 Nano with an iPhone, where video resolution is better but it's still easy to produce.
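To put that in numbers, here's the quick back-of-the-envelope math; the 100-degree field of view is an assumed, typical figure for a phone-in-a-viewer setup, not a spec.

```python
# Back-of-the-envelope: why 1080p looks soft once it's wrapped into a sphere.
# The 100-degree field of view is an assumed, typical value for phone viewers.

frame_width_px = 1920     # horizontal pixels in a 1080p equirectangular frame
full_circle_deg = 360     # those pixels have to cover the full circle around you
viewer_fov_deg = 100      # rough field of view of a Cardboard-style viewer

px_per_degree = frame_width_px / full_circle_deg   # ~5.3 pixels per degree
visible_px = px_per_degree * viewer_fov_deg        # ~533 pixels across your view

print(f"{px_per_degree:.1f} pixels per degree")
print(f"about {visible_px:.0f} horizontal pixels fill your whole view")
```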
(For the cost-conscious among you: The Ricoh was $360 at the time. The Insta is $200).
Both have their own apps, which make stitching and editing simple, and that's enough for us.
But is it needed?
Are photos and videos needed?
A good starting point for deciding whether virtual reality or 360 content is needed is to ask yourself a simple question when planning to cover a story: Is this something people may want to look around?
Many times, a reporter or photographer has gone to cover an event with the camera and returned without 360/VR content because the story did not need it or it was not practical to do it, and that is OK.
As a small newsroom, we're not going to be doing high-caliber, documentary-level virtual reality stories. So, covering stories with virtual reality and 360 can be a challenge.
We've done some protests, festivals and sporting and weather events, which fit our resources and capabilities.
Larger publications and outfits are doing more immersive, personal stories. Reveal did an interesting, character-centered narrative, "Disfellowshipped." Among its many empathy-centered narratives, StoryUP did one on what it's like to experience a stroke. And then there's PBS' Frontline.
But if you're a small publication with limited time and resources, and you're trying to create something that will resonate with your community, the immediate ideas that come to mind are covering a protest, a travel piece, sporting events and fairs, and weather events, like a storm or a flood, as these are news events that have more immediate impact when they happen.
This is basically a recognition that, in our newsroom at least, we won't be doing documentary-level productions (which we don't do with regular video either), and those are the ones that can really push the envelope.
Furthermore, there are ongoing discussions about the proper length. How long will people watch a VR feature that's not interactive? For our purposes, we cap the experiences at two minutes (longer if doing it live, but that's another matter). Yet another reason we keep them short is processing time.
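Trimming itself is the cheap part. Here's a sketch of how a clip could be capped at two minutes with ffmpeg, wrapped in Python; the file names are placeholders, and depending on the camera and toolchain you may need to re-inject the 360/spherical metadata afterward (for example with Google's Spatial Media tools) so platforms still recognize it as a 360 video.

```python
# A sketch of capping a 360 clip at two minutes with ffmpeg from Python.
# File names are placeholders. Stream copy (-c copy) avoids re-encoding,
# which keeps processing time near zero, but you may need to re-inject the
# spherical metadata afterward depending on your camera and toolchain.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "festival_raw.mp4",    # placeholder input clip
        "-t", "120",                 # keep the first 120 seconds
        "-c", "copy",                # no re-encode
        "festival_two_min.mp4",      # placeholder output
    ],
    check=True,
)
```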
But these are not rules. Everyone is currently trying different things, from the New York Times' Daily 360, to AP, to our less polished 'At the scene'-style videos.
It is worth pointing out that for us, Facebook is best for distribution of 360 videos and even 360 live videos. We are not producing content to be necessarily seen in a virtual reality setting because there are not that many consumers in our community with VR devices.
(As a side note, Facebook and Vimeo have 360 controls; Facebook and YouTube both have automatic captions as well. Thanks, artificial intelligence!) But a Facebook embed won't work on all mobile devices. And YouTube embeds will stretch the video.
I haven't talked about monetization (it wasn't part of the chat), but I wanted to touch on it briefly. Currently, virtual reality works better as a complementary technology when it's rolled in with the other news offerings, so the monetization is already built around it (YouTube, page views, etc.). That is likely the most reasonable way to go, unless, of course, you have your own tech and players and apps and you are the New York Times, you bastards.
But the main purpose from a news perspective is to better inform your audience. Virtual reality and 360 content have the potential to bring your audience to the very place where the news happens. The basic production is easy and relatively cheap to do, and getting easier, and the platforms are already there, with built-in audiences to boot.
So, as a tool of the trade, today, 360-degree content and virtual reality can be a reasonable offering from a small newsroom.
Go break some things.