As news consumption evolves, The New York Times’ R&D Lab is at the forefront of redefining how stories are told. Their goal? To enhance the narrative experience without sacrificing journalistic integrity.
With a focus on exploring how emerging technologies can be applied in service of journalism, the department is proving that innovation can enrich journalism rather than distract from it. But it’s a fine balance. In a landscape littered with shiny new things, where innovation is often treated as a buzzword, the R&D Lab team pioneers new technologies to serve the story.
Scott Lowenstein, director of research and development strategy at The New York Times, oversees the R&D team, leading experimentation with the next generation of storytelling. The R&D group is a mostly technical team that works on applications of emerging technology in the service of journalism, but also of the Times’ products and business, he explained.
Over the years, the definition of what qualifies as emerging technology has continuously evolved. Since its inception in 2006, The New York Times’ R&D Lab has explored a wide range of technologies in pursuit of innovative storytelling: a commuter app in 2009 that combined traffic cameras, Google Maps data and location-specific content from the Times; a “magic mirror” in 2011 that used motion sensing and voice recognition technology; and wearables in 2014.
The R&D Lab’s push into virtual reality (VR) in 2016 marked a significant milestone: the team created over 30 immersive films that showcased the narrative potential of the medium, and the films drew strong viewer engagement, with over 1 million downloads of the Times’ VR app. In 2017, the team supported newsroom storytelling by integrating emerging technologies like mixed reality and connected home devices, while also using automation to improve the identification of public figures in photos.
By 2019, the R&D Lab had delved into more advanced technologies: computer vision to reveal hidden narratives within visual content, photogrammetry to create immersive 3D environments from 2D photos, and spatial computing to explore new storytelling formats in augmented reality. The team also worked on enhancing media transmission and on natural language processing (NLP) to extract insights from the Times’ extensive archive, while developing tools to combat misinformation.
In 2021, the team developed the Times’ first augmented reality (AR) game along with animated AR sequences, continuing to explore spatial journalism and improve media transmission with 5G technology. By 2022, the focus had shifted toward integrating locative data and spatial understanding into journalism, aiming to better reflect and connect with communities and, in turn, enhance storytelling.
“Never-ending cycle of exciting, interesting problems to tackle”
A lot of the R&D team’s work starts when Times staffers come to them with an ambitious project they’d like to do but don’t know where to start. Lowenstein said the team researches technologies, the latest tools on the market, and projects in adjacent industries, from architecture to game design.
“We prioritize the ones that feel achievable. We find the technology to match the ambition of the story,” Lowenstein said. “We will oftentimes say, this technology is really cool, but it’s not ready. But, when we do find an idea that works and match it well with the story, then we’ll help that team come up with the first execution.”
And, when the team finds a technology that works, they’ll find ways to make it repeatable, so that future stories within The Times can use it. “We’ll also often open source our work and share it with a broader journalism community to help replicate what we’ve done.”
“Help us do something that we couldn’t do before”
Currently, the R&D Lab focuses on three areas:
- media transmission and provenance, which helps readers distinguish credible news from mis- and disinformation;
- artificial intelligence (AI), including computer vision, which extracts information from visual media; and
- spatial journalism in a broad sense, which encompasses everything from 3D graphics to mixed and virtual reality.
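To make the first of those areas concrete, here is a minimal sketch of one way media provenance can work: the publisher signs a cryptographic hash of an image, and anyone holding the publisher’s public key can check that the file has not been altered. This is purely illustrative and assumes nothing about the Times’ own tooling; the fingerprint, sign_image, and verify_image helpers are hypothetical names, and the example uses Python’s standard hashlib module alongside the widely used cryptography package.

```python
# Illustrative sketch only: not The New York Times' provenance system.
# Idea: the publisher signs a hash of the image bytes, and anyone holding
# the publisher's public key can confirm the file is unaltered.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def fingerprint(image_bytes: bytes) -> bytes:
    """Return a SHA-256 digest of the raw image contents."""
    return hashlib.sha256(image_bytes).digest()


def sign_image(image_bytes: bytes,
               private_key: ed25519.Ed25519PrivateKey) -> bytes:
    """Publisher side: sign the fingerprint with the newsroom's private key."""
    return private_key.sign(fingerprint(image_bytes))


def verify_image(image_bytes: bytes, signature: bytes,
                 public_key: ed25519.Ed25519PublicKey) -> bool:
    """Reader side: True only if the image matches the published signature."""
    try:
        public_key.verify(signature, fingerprint(image_bytes))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()
    original = b"...raw JPEG bytes of a news photo..."
    sig = sign_image(original, key)

    print(verify_image(original, sig, key.public_key()))         # True
    print(verify_image(original + b"!", sig, key.public_key()))  # False: bytes changed
```

In practice the hard problems start where this sketch ends: signatures and provenance metadata have to survive re-encoding, cropping, and redistribution across platforms that routinely strip them.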
“The mission is to try to use these tools in service of great storytelling and not try to do something gimmicky. Instead, we want to do something where this new tool could help us do what we couldn’t do before,” Lowenstein said.
Many of these projects are seriously ambitious: they are not only hard to do, but also require a large team of technologists and editors and significant resources. So the Lab invests heavily in streamlining that process.
“A lot of our ambition is trying to just make that whole process easier for the teams so that they can just say, we want to do this and that they can do it the same day,” Lowenstein said. “Some of it is building software and templates and things that don’t require a super-specialized 3D engineer to build a 3D story. That’s where a lot of our effort goes: finding those repeatable things and creating the tools that allow editors to do that work with less or no help from us.”
Some of Lowenstein’s favorite stories came out of the early days of augmented reality. “We built a lot of stories that we published as 2D stories on the web, and then translated specific slices of them into AR that people could use on their phones. Some of those were very well received and we learned a ton from that experience.”
Balancing innovation and integrity
The New York Times R&D Lab is not interested in employing the latest technology just for the sake of doing it. Technology is a tool in the service of their mission, not an end in itself. “One of the great things about working with our amazing journalists and editors is that their gimmick meter is really strong,” Lowenstein said.
Timeline of innovation at The New York Times’ R&D Lab
2006 – Launch of R&D Lab
- Foundation: The New York Times R&D Lab was established to innovate and experiment with new technologies and storytelling methods.
2009 – Multimedia Applications
- Commuter App: An app that combined traffic cameras, Google Maps data, and location-specific Times content, enhancing news delivery through geocoded articles and blog posts.
2011 – Interactive Technology
- Magic Mirror: A data-bearing mirror using motion sensing and voice recognition to deliver on-demand information via the Times’ APIs.
2014 – Semantic Listening
- New Tools: Developed systems for extracting semantic information and crowdsourcing data on cultural artifacts.
2015 – Wearable Technology
- Focused on wearables, examining their potential uses in the media landscape.
2016 – Virtual Reality (VR)
- Created over 30 immersive films.
2017 – Emerging Technologies
- Shifted focus to support newsroom storytelling by integrating technologies such as mixed reality and connected home devices, while also improving public figure identification in photos through automation.
2019 – Computer Vision, Photogrammetry, and Spatial Computing
- Computer Vision: Revealing hidden narratives within visual content.
- Photogrammetry: Creating immersive 3D environments from 2D photos.
- Spatial Computing: Exploring new storytelling formats in augmented reality.
- Media Transmission & NLP: Enhancing photo and video transmission and extracting insights from the NYT’s extensive archive.
- Misinformation Tools: Supporting informed news consumption.
2021 – Augmented Reality
- Developed the Times’ first AR game and animated sequences for augmented reality, while continuing to explore spatial journalism and improving media transmission with 5G technology.
2022 – Spatial Journalism
- Emphasized integrating locative data and spatial understanding into journalism to better reflect and connect with communities, thereby enhancing storytelling.
Technology evolves rapidly, and a solid understanding of how emerging tech can be applied is incredibly valuable. The R&D team began exploring large language models back in 2019; now that these models are widely adopted, the team has a significant advantage in knowing how to apply them responsibly.
Lowenstein emphasized the importance of transparency in their work. “We’re always going to tell our readers when and how we’re using these emerging technologies,” he said. “I view it as a duty to our readers to explain how we’re using these technologies. And in that process of explanation, that is the process of balancing storytelling and integrity in action.”
Those principles show up in reader-facing ways: stories on the Times’ website include methodologies, labels, and sometimes tutorials or explanatory text. They also show up in industry-facing ways: the R&D Lab shares its work as widely as it can on its own site and through open source libraries for others in media.
“I think that’s one of the great benefits of having this kind of open source mentality: we put our stuff out there and we can see how the community is using it,” Lowenstein said.
A lot of people are using the photogrammetry, spatial journalism and 3D storytelling libraries, and contributing to them, he explained. “It’s super gratifying to see this little community built up around some of the photogrammetry, the spatial journalism. It’s awesome to see how people have taken that work and evolved it and made it their own.”
Technology should be in service of the story
As technology and journalism continue to evolve, other organizations can glean valuable insights from The New York Times R&D Lab’s approach to experimenting with emerging technologies. For one, Lowenstein emphasized that technology should be in service of the story. “That’s first and foremost. We don’t do it for the sake of doing it. The story should come first, and you should really look at whether the ambition of the story warrants it.”
Embracing experimentation is equally vital, and even though trial may lead to error, each misstep could lead to unexpected insights. “We often try things that don’t work the first time or the fifth time, and then you find the perfect fit,” Lowenstein said.
Lowenstein suggested keeping a record of what you’ve tried and what you’ve learned along the way. Oftentimes, he said, things pop up that you didn’t know would be useful in the future, and keeping records allows you to adapt to new possibilities.
Finally, the director suggested that it is important for media companies to have principles about how they use technology responsibly and in service of their work; such principles help guide decision-making.
By establishing clear principles for the responsible application of emerging tech, organizations can carve out a focused space to innovate while delivering quality journalism that resonates with audiences. Ultimately, it’s this blend of technology, experimentation, and integrity that will empower the industry to navigate into the future.
“In some ways, having those clear principles about when and how you use technology is a great creative constraint. It allows you to define the parameters of how you want to use the technology in service of quality journalism. And, working within those boundaries from the beginning makes it achievable and makes the space of things that you could do smaller so that you can actually focus on the right stuff,” Lowenstein said.
The opener art comes from the R&D Lab’s work with LiDAR and drone automation for detailed photogrammetry.