Carol Reiley | Courtesy of SF Symphony

There is nobody quite like Carol Reiley: serial entrepreneur, computer scientist, and artificial intelligence roboticist. Indeed, this pioneer in teleoperated and autonomous robot systems — in such varied applications as surgery, space exploration, disaster rescue, and yes, even self-driving cars — is also a San Francisco Symphony collaborative partner, one of the group of eight artists, thinkers, and doers that Music Director Esa-Pekka Salonen put together when he joined the orchestra. This innovative initiative has been making waves since composer Nico Muhly’s COVID-era streamed concert, Throughline, premiered in 2020.

Reiley’s program, “Press Play: Carol Reiley and the Robots,” which will be presented April 5–6, is the latest offering in the SF Symphony’s SoundBox series. Billed as a “human-centered experience with audience participation to create pieces of art,” the evening features classical music as seen through the lens of a roboticist and is meant to inspire musical collaborations between humans and robots. Naturally, the human factor includes a number of musicians from the SF Symphony.

At the cutting edge of today’s technological advancements, Reiley, who was born in Flint, Michigan, in 1982, received a degree in computer engineering from Santa Clara University in 2004. Concentrating on robotics research, she then earned a master’s degree in computer science from Johns Hopkins University, where she specialized in haptics, the science of touch feedback. Reiley then enrolled in a doctoral program at Stanford University but dropped out in order to pursue her myriad ideas for startups.

The mogul, whose nickname is “Mother of Robots,” is the first female engineer to appear on the cover of Make magazine, posing in a Superman-like T-shirt. She had previously been featured in Make for co-creating, with Robert Armiger, the Air Guitar Hero project, a game designed to offer rehabilitation exercises for amputees by allowing players to control the game with electrical signals from their arm muscles.

In other words, the San Jose-based Reiley may be using technology to advance health care, having worked at Intuitive Surgical and founded Tinkerbelle Labs (which is focused on designing affordable medical tools), but music does come into her tech-savvy, groundbreaking equation, too. In 2020, the mother of two founded deepMusic.ai with violin superstar Hilary Hahn. Its mission, according to its website, is to shape “the future of music to figure out how and where AI can assist creatives and utilize human strengths to build something novel.”

SF Classical Voice had the opportunity to speak with Reiley on a range of topics, including her upcoming SoundBox program and how AI can help artists be more creative.

When did you first get interested in robotics, and how did you get the nickname “Mother of Robots”?

I feel like it might have been as an undergrad in my freshman year [that I got interested]. I had a fellowship that covered tuition and didn’t know anything about coding or robots, [so] I knocked on a lab door and was willing to do whatever. The reason I fell in love with engineering was [realizing that] tech can help people and impact lives.

[As for being] nicknamed, it’s fun. It also alludes to the fact that I really love taking care of robots, fixing them, helping them. And now that I work on live performances, I’m like a nervous stage mom on the side.

Esa-Pekka Salonen and SF Symphony collaborative partners

Speaking of performances, what was the genesis of your being a collaborative partner of the SF Symphony? You’re in the company of, among others, composer and pianist Nicholas Britell, soprano Julia Bullock, and composer and guitarist Bryce Dessner.

Esa-Pekka contacted me because he wanted someone as a creative partner who was really strong in AI. I had to be convinced because I had a little bit of imposter syndrome [and said], “I’m not sure.” My first meeting with him was interesting because I come from a world of presentations [and] came with a bunch of papers and said, “Here’s my five-year plan. These are my ideas. What’s the problem?”

I felt like I got laughed at. But he was like, “I love your energy, your playfulness. You’d be perfect.” I reluctantly signed on in 2018–2019, [and] we were going to do a collaborative partners conference, but COVID shut us down. And especially with a tech project, it would [have been] hard to be remote.

For the Symphony, I [was] creating something using OpenAI [a U.S.-based artificial intelligence research organization]. Around that time, questions were starting to spring to my mind. If your job is in a creative field, are you safe from AI? As engineers, we would say, pre-2022, “If you do manual labor, [such as] moving boxes for shipping companies, your job could be automated by a robot. The thing that AI can’t do is creative work. If you are a writer or an artist, your job is probably safe.” I started not believing that [and] wanted to get ahead of it.

So, in 2020, you launched deepMusic.ai with Hilary Hahn in order to pair AI scientists with world-class artists, including Pulitzer Prize-winning composers David Lang and Michael Abels and Grammy Award-winning composer, multi-instrumentalist, and producer Dana Leong. How did this come about, and what was your mission?

I called Hilary. We’ve been friends for a long time, and I wanted to pick her brain to ask, “How would tech impact your life?” We’re both well respected in our different fields that rarely cross. We talked for two hours in the rain about [many] topics, [including] the self-driving car space [and] how fast and disruptive that was. We thought, “What would the future look like in music, and can we do something about it?” That’s how deepMusic.ai got started.

And then?

We wanted to work with top-notch human composers to get their take on where the world of AI is going. I felt it was important to have their voices integrated into the company, their artistic intuition. So often what happens is you have the story of a Silicon Valley tech guy; he’s in a garage and builds something cool, but it’s kind of an echo chamber.

To give context, there were eight of us who co-founded Drive.ai — self-driving cars. Eight of us could build a car and [show] how few people it takes to disrupt an entire industry. What could eight people do to disrupt the world of music or [a field] in the arts? I saw how powerful the tools were. Quite a few were springing up, [and] we wanted to pair [them with] human composers and understand the nuance behind creating art.

We were tiny at that time. The [composers] connected with three different engineers, and Michael Abels wrote a five-page paper on all the things he noted about the tool and how he [and his colleagues] used it to compose, what limitations they all felt, and what they wanted to fix. That was the first experiment we did with deepMusic.ai, in 2020–2021.

SF Symphony’s SoundBox | Credit: Stefan Cohen

What will audiences see when they attend SoundBox April 5–6?

This is a culmination of a lot of different things, but it’s a showcase of AI specifically. From my vantage point as a roboticist, entrepreneur, and artist, [it’s] what I think is cutting edge that has exploded in terms of AI and art. What I’m trying to get at is human-driven. It’s interactive. What you’ll see are many types of pieces [through] as many different lenses as possible. My goal has always been to augment human creativity.

You will not see a robot conducting a symphony or playing an instrument, but what you will see will be different forms of art and members of the SF Symphony playing. The things I’m showing have not been done on any stage yet; this is pioneering work. There are 18 chamber music and solo pieces. I’ll introduce them, and there’s a reason why I chose each piece. We’ll [also] have, hopefully, a different perspective on the way an audience member can experience a piece.

There are three different sections, with music by composers including Philip Glass, Antonio Vivaldi, and George Gershwin?

Yes. It’s broken up into three sections, and there’s a strong mix. The first is about human creativity, [and] we’ll have [some J.S.] Bach. There’s Steve Reich, [and] I did want to have [Béla] Bartók. There’s a company, Emotiv [which explores brain technology]. It’s not my company; it’s a collaborator that’s been around since 2013. One musician is going to listen to music, and you will see his brain [activity] on tape with an EEG [electroencephalogram] helmet.

[Another piece is from] Richard Reed Parry. He’s in Arcade Fire, and we’re revamping his “Duet for Heart and Breath” from an album from 10 years ago. It’s about the nonpassive — the heartbeat — and the breath. He had musicians wear stethoscopes, and they were conducted by heartbeats. I added the brain, so it feels like The Wizard of Oz, a gathering of the heart, the brain, the breath. That’s an example of how I approach this, how it’s an unexpected way to use technology: by enhancing [music] in a way we couldn’t do alone.

OK, so how can AI help artists be more creative and free them up somewhat?

It’s going to be a very interesting next few years. We saw the [Hollywood] writers’ strike. It’s inevitable that jobs are going to change from the way they’re currently being done. As a creator, I think AI allows for more output. You can be more productive and generate more things than you could if [you had to realize every] sketch in your mind [in order to create] an artwork.

I would like to help humans be more creative. I’m interested in the arts because I think that is the beauty of humanity. I believe the magic is what makes us human. It’s not your heart, your brain — it’s your creativity. I would like each one of us, whether you’re a famous artist [or not], to utilize these tools. It’s going to be very disruptive and similar to how the computer has changed how we work.

It will help us. You don’t have to be as skilled anymore; we’re not looking for extra precision. Machines can do that better, but what we can do is start to see things on a broader spectrum. There’s a capacity in our heads for a certain number of composers, but if you’re able to search through 200 years of history and get different chord combinations, these patterns can be suggested to you, but you’re in control [of] what you decide to do. What I would use tech to do [would be something that] hadn’t been done before.

That seems both scary and exciting. Speaking of the future, where do you see yourself in the next five to 10 years?

Doing something completely different. I feel like tech and robotics allows me to wear different skins — I call them chapters. But I always wanted to work on things that are big, big problems. My goal is to help people by bringing robots out into the world. It’ll be that, but I don’t know what application. I do think the arts and creativity are the most complicated thing about being human, [and] I’m planning to be in this space for quite some time.