[music playing]

Oby: From the campus of Harvard Medical School, this is ThinkResearch, a podcast devoted to the stories behind clinical research. I'm Oby.

Brendan: And I'm Brendan, and we are your hosts. ThinkResearch is brought to you by Harvard Catalyst, Harvard University's Clinical and Translational Science Center.

Oby: And by NCATS-- the National Center for Advancing Translational Sciences.

[music playing]

Brendan: Pilot grants provide critical early funding to investigators and rely on a complex review process for approval. Because these grants are often very specialized, it can be difficult to find enough reviewers with the right expertise to evaluate proposals. At the University of California, Irvine, Dr. Margaret Schneider's team realized the need to expand their network for the review process. Then, she discovered universities across the nation shared a similar vision.

On this episode of ThinkResearch, Dr. Hardeep Ranu, of the Harvard Catalyst Translational Innovator, and Dr. Schneider discuss their ongoing efforts with the CTSA External Reviewer Exchange Consortium.

Dr. Ranu is a Project Manager for the Translational Innovator at Harvard Catalyst, and Dr. Schneider is the Director of the Pilot Studies Program at the University of California, Irvine.

Hardeep Ranu: My name is Hardeep Ranu, and I am a Project Manager in the Translational Innovator program for Harvard Catalyst. And today, I am talking to Margaret Schneider, who is the Director of the Pilot Studies Program at the University of California, Irvine. So Margaret, welcome.

Margaret Schneider: Thank you. It's nice to see you, Hardeep.

Hardeep Ranu: You, too. So first of all, I think I'll start with why we're talking-- why we think it's really relevant to get this information about what we're doing out there, because I don't think many people know about it. And I wonder if you want to start off and tell us what CEREC is and how it started.

Margaret Schneider: Sure, absolutely. So CEREC is the CTSA External Reviewer Exchange Consortium. And it exists to facilitate the exchange of scientific reviews for funding applications that come in to a group of hubs-- which is what we call the different locations where clinical translational science awards are managed. And the point of CEREC is to find the best reviewers possible for each application that comes in to each of our participating members so that we can select the best science to support.

Each of our hubs, each of the CTSA hubs has a Pilot Studies Program that gives small amounts of funding towards early-stage scientific investigations. And a number of years ago when we first started our CTSA hub, we wanted to find out how other CTSA hubs were running their pilot study programs because it was a new endeavor for us and we wanted to find out best practices. So we did a survey of the whole CTSA network, and as part of that survey we asked, are you interested in exchanging reviewers?

So the members of CEREC are the people who responded to that survey by saying, yes, we'd be interested in exchanging reviewers. And that was, I think, about four years ago-- and the rest is kind of history.

Hardeep Ranu: And then, do you want to just tell us who the members are and the fact that we're all around the country?

Margaret Schneider: Sure. So I'll start listing them and then you jump in because I don't want to leave anybody out, but obviously, Harvard is one member of the Consortium. I'm at University of California, Irvine, so we're another hub. Other hubs include Ohio State University, the University of Washington in Seattle, Medical College of Wisconsin, University of Southern California. So we have two, actually, in California.

We have the Virginia Commonwealth University and-- help me out, Hardeep. Who am I leaving out?

Hardeep Ranu: University of Alabama at Birmingham, and Arkansas.

Margaret Schneider: Right, the University of Arkansas for Medical Sciences. So there's nine of us altogether.

Hardeep Ranu: Yeah, there's nine of us. And well, you've already spoken to why we need a review exchange. I mean, do you think that it should be explored not just with the pilot programs, but with the other programs that the CTSAs also run?

Margaret Schneider: Yeah, so you know, one of the reasons we really wanted to find an exchange consortium to be part of is that we're a smaller university. And so when we get an application for a particular project to be funded, it's highly likely that we won't be able to find three excellent expert reviewers within our own institution who are not also in conflict with the proposal that we're reviewing. And I think that's true for any funding program within a university setting-- it's going to be hard to find really well-qualified reviewers who don't have some affiliation with the person who's applying for funding.

So yeah, I think it's a good practice to explore across all funding mechanisms.

Hardeep Ranu: And there was something that came up while you were speaking-- and we've come across it a few times-- there's always a proposal that it's difficult to find reviewers for. And that's something that's kind of challenging. And I'm going to put you on the spot here: do you have any examples of that you can think of, even from the UCI side?

Margaret Schneider: Yeah, well, I don't remember which institution posted this proposal-- I don't think it was UCI-- but there was a proposal on our exchange consortium that had to do with using animals-- comfort animals in hospitals-- to promote the well-being of patients. You know, a highly specialized topic. And it was very hard to locate three expert reviewers who could provide really informative input into that application.

Hardeep Ranu: Right, right. And with that kind of application, I think the search terms become really difficult to narrow down, as well.

Margaret Schneider: Yeah. Another example that we've had here, which actually has not been posted to CEREC because it wasn't part of our pilot programs, but we have an investigator here who does research on the use of robotic avatars in a school setting for students-- like kids who have chronic illness and cannot attend school in person. And this was long before the COVID era, but she was doing research on using these mobile robot avatars in the school setting and how being able to virtually participate at school impacted the kids' psychological well-being.

We have had an incredibly hard time-- and we have looked internationally-- finding the right reviewers for these studies, because the work is so innovative and so unusual.

Hardeep Ranu: Yeah, yeah, I can imagine-- although it does sound like really good research that's going on. I obviously know how CEREC works operationally, and I think it actually works really well-- the way in which everybody works together. And I was wondering if you wanted to talk a bit more about the operational aspects of it, because I think it's interesting how that came about-- I don't know if it was easy to get set up.

Margaret Schneider: Right, and you've been with us from the beginning so you know what we went through to get to where we are. And I think one of the reasons it works so well now is that we developed it organically out of our needs. And when we first started, we really didn't have a set idea of what the process would look like.

So we had to experiment with a number of approaches. And what we ended up doing was creating an online portal that each of us can access and the requesting hub posts their abstracts there. So whenever they have a call for funding, and they are looking for external reviewers, they will post the abstracts of those studies to our online portal. And then the providing sites can view and download that information and use it to send out invitations to reviewers at their own site.

So I think a couple of the most important features are that we can all see the same information by going to this online portal. So we always know the status of a particular request. We know how many reviewers have been invited. We know how many reviewers have accepted. But the review process, itself, is decentralized.

So in the early years we sort of explored the idea of standardizing our review process-- trying to all use the same forms and the same scoring systems-- and we quickly discarded that idea, because each hub has its own personality and things change very quickly. So it would have been very hard to keep those processes synchronized.

So I think something that people find hard to understand when they first start learning about CEREC is this simultaneously coordinated but decentralized review process.

Hardeep Ranu: Right. And I also think the updates that we've made to CEREC Central are really good, in that now we can see who's already sent an email to someone-- so we don't have to go, OK, I've already done it, and then somebody else finds someone. We can at least go in there and focus on the ones that nobody has sent a review invitation for yet.

Margaret Schneider: Right.

Hardeep Ranu: Yeah, I think that part is so helpful now that we have it. We've had a face-to-face meeting, and when I went to that face-to-face meeting, what I found was that it was incredibly productive and collaborative. And I think it was a real step up from the conference calls, because until then we were just doing conference calls-- we weren't even having Zoom meetings, so we couldn't even see each other. So I think it was really helpful to have had that, so that now we can actually reach out to each other and ask for help on best practices and all those kinds of things. What was your takeaway from the face-to-face meeting?

Margaret Schneider: I totally agree. I mean, I think we had been working together for almost a year and a half before we finally met. And as you say, we didn't even use Zoom back then, so all we knew were the voices. And I think meeting together for a couple of days in Southern California was not only really fun, but a really important stage in our development as a collaborative.

And as you alluded to, now there's a lot of back-and-forth between the members, even outside of running the reviews because each of us is constantly running into new challenges in running our programs and it's so useful to be able to reach out to our friends and collaborators within the Consortium and ask, how do you deal with this? And get some ideas about how to solve our problems.

Hardeep Ranu: Yeah, hearing back from people makes me feel that we're not the only ones having a given problem, and that it's sort of OK. Is that reassuring? Does it normalize what's going on for us? At least we have that information there for us.

Margaret Schneider: It's so interesting, because right now we're in the process of replicating CEREC with a new consortium. And watching them go through the same stages of development that we did is a little bit torturous, you know-- you want to kind of accelerate the process. You want to say, yeah, we did that, we had to learn that, but here's how you can overcome it.

But I think that they need to experience it in the same evolutionary sense that we did. And one of those stages, I think, was that in the beginning our members were not really sure whether the time invested in CEREC would be worth it, in terms of what they were going to get out of it. Because we're all working at capacity in our jobs and don't have spare time to be providing assistance to somebody at another site. But the return on that investment has been so spectacular that I think we've all stopped worrying about whether this is a valuable use of our time.

Hardeep Ranu: Yeah, exactly. Exactly. And I want to switch gears a bit and talk about things from the reviewer's perspective. What do you think they get out of it? I know there have been times when I've seen an application and thought, oh, I know someone who's really good for that-- and it would be really good for them to have reviewed for an external institution so that they can put it on their CV. But why do you think the other people do it-- the full professors who don't really need to add an external review to their CV?

Margaret Schneider: Right. And obviously, that's critical. Because if we don't get reviewers saying yes, then our whole system collapses. And they're not being paid to do it. It's entirely volunteer. As you just said, a lot of academics do this kind of reviewing routinely because it's part of their professional service. And each time they come up for promotion, they have to fill in that box on their dossier and explain what they've done for the professional community.

And then, there are a lot of people who really are interested in knowing what science is going on at other institutions. You know, everybody is so busy all the time getting their own work done, that sometimes they don't have time to lift up their heads and find out what's going on at other places. And so I think a lot of the senior people are interested to find out what kind of science in their field is being done at some of these other institutions.

Hardeep Ranu: As you're saying that, I remembered that I had seen an application and it looked so interesting that I was like, I'm going to find a reviewer for that because I actually want to look at the application-- because I was so interested in the science. I was like, oh this is sort of different and I want to see where things are in that particular field.

So I know I did it for myself.

Margaret Schneider: Yeah, exactly. And the science that comes through this consortium is so diverse and so cutting-edge that it is exciting to see what ideas are out there. And as you know, it ranges across the continuum from very basic scientific inquiries to really applied scientific inquiries. So it's just a huge body of innovation that comes through our consortium.

Hardeep Ranu: De-identified proposals-- I know that's something that we've started to implement, and we've got some feedback saying, this is great, and other feedback saying, why are you doing this to me? I need to have all that information that you should be supplying.

Margaret Schneider: Yeah, that is a really interesting question. And I think it's one that NIH is also struggling with. They've explored the possibility of doing de-identified applications. We don't do that in CEREC, and one of the reasons we don't, I think, is that it's a lot of work to de-identify applications and to do it effectively. And as I already mentioned, we're all working at capacity. So I don't think we're looking for new ways of spending our time.

But another reason that I would probably be hesitant to go in that direction is that right now, when we send an invitation to a reviewer, we ask them to decline if they think they might be in conflict with that proposal. And one way they can determine that is they look at the name of the principal investigator and say, do I know this person? Right?

If you remove that information, the reviewer might say yes, and then three-quarters of the way through the application they might become aware that they know this researcher. And at that point, they might have to say, I'm in conflict, I can't review it. And by then you're so far down the timeline, in terms of when the review is due, that we might not be able to find another reviewer. So that's kind of why I would be hesitant to go in the direction of de-identification.

Hardeep Ranu: Yeah, I mean, when we have been doing that, we have definitely come up against that. Somebody has said, you know, I took a look at this, and I know who it is. I'm familiar with their work. And sometimes they say, I cannot review this. And other times I offer them the opportunity to complete the review, asking, can you do this independently, without taking the fact that you know them into account?

And you know, probably 75% of the time they say they can review it. In terms of what you were saying about the work involved in de-identifying the proposals, that's my job when these proposals come in. And yes, it is not an insignificant amount of time to do that.

And if you have 50, 60 applications come in, it gets pretty time-consuming.

Margaret Schneider: I can imagine.

Hardeep Ranu: So Margaret, I've sort of touched on this-- we've touched on it in our conversation-- but what do you think is the value of CEREC? And by that I mean in terms of the CTSAs as a whole. Because I know that we have started to help other institutions build out their version of CEREC-- I think we've been calling it CEREC 2. So I wonder what the selling point could be, or is, really, of this kind of collaboration?

Margaret Schneider: Right. So the CTSA program is administered by NIH within a center called NCATS. And the mission of NCATS is to accelerate the translation of basic science discoveries into applications that will impact human health. And so the pilot studies programs are part of that mission. And what we are seeking to do is to put some resources behind the most promising research-- the research that is most likely to be translated into eventual interventions or programs that are going to impact human health.

So we have to have a way of identifying the most promising science. And so that's where CEREC adds its value, I think, is that by enhancing the rigor of the review process we increase our chances of finding those studies that are most likely to result in a benefit to human health.

And the other aspect of it is that when NCATS formed the CTSA network, they were very upfront about their intention to demonstrate that we could accomplish more by collaborating than by competing. And so NCATS encourages the different CTSA hubs to work together and exchange resources, exchange ideas, create some synergy-- so that, basically, the whole is greater than the sum of its parts.

And I think CEREC has been a really good model of how that can happen-- where multiple hubs come together and as a result, we sort of improve the system for all of us, instead of each of us working in isolation and competing with one another.

Hardeep Ranu: Yeah. I think that's a really great point that you make there. Science, in general, could really benefit from this kind of external review process, I think. I mean, I also think it could benefit from de-identifying proposals, as well, considering the biases that are out there. And what I have seen, certainly in the pilot grant applications that we've received and funded, and also in the ones that our other partners have posted to CEREC Central, is that what we're looking for are those proposals that are always slightly new or different-- that's how the innovation comes about.

And I also think that for us-- I can only speak about us, but there's a high risk in funding something that doesn't have a whole load of data behind it. But there's also a really high reward to those types of projects. Is that something you've seen, as well?

Margaret Schneider: Yeah, and we actually use that phrase, high risk, high reward, when we are advertising our funding opportunities. Because that is what we're looking for. And so we realize that some of these studies are probably going to fail and not yield anything, but some of them will really succeed in a big way.

And again, having the external scientific review just gives us a better chance of identifying, out of the sea of proposals that we all receive, the few gems that might really have a big impact on the well-being of our community.

Hardeep Ranu: Yeah, exactly. Exactly. Going back to CEREC, and you know, the origins, et cetera, how do you think it's evolved over the past few years?

Margaret Schneider: Well, so we've evolved both technologically and socially, I think.

Hardeep Ranu: Yeah, I thought about that. That's what came to mind straightaway: we started off having conference calls, where I was using a telephone, to now using Zoom. At least, that is one way in which we have evolved.

Margaret Schneider: Absolutely. And our online portal has evolved. We now have CEREC Central 2.0. We went through sort of a refresh midway, where we enhanced certain capacities. So one thing that we have on the CEREC Central online portal is that we automatically track our statistics.

And we can look at any time at the status of the whole consortium-- how many reviews have been provided? How many reviewers have been contacted? What's our yield rate? A whole slew of statistics that help us monitor our program and also help us in our reporting to our local leadership about the value that this consortium brings to each of us.

And then socially, I think it's been a really important process of learning to trust one another and learning to trust in the system. Early on, when it was still new, there was a high level of anxiety every time one of us posted a call to the site because you never really knew if the partners were going to step up. And even if they tried, maybe they wouldn't be able to find a reviewer. But over time, we've learned that it does work and we all have such a personal investment in it that we've got each other's backs.

And I think that the anxiety level has gone down as the trust experience has gone up.

Hardeep Ranu: Yeah, yeah. Those are great points. I think we're close to wrapping up, and I was just wondering what your takeaways are-- operationally speaking, technically speaking, in terms of the proposals-- but also any other takeaways that you wouldn't necessarily have thought of.

Margaret Schneider: I think that the biggest takeaway for me is that I consider the fact that it's essentially self-perpetuating at this point to be the biggest evidence that this was a good idea. So again, we're about four years on from when we started this, and I feel that the Consortium is stable. We've had turnover at individual hubs, in terms of the personnel that are involved, and yet CEREC continues to survive. And as I mentioned, we're now cloning ourselves into CEREC 2, which seems to be taking off successfully.

So I think the takeaway would just be the extreme value in forming these kinds of partnerships-- really at the administrative level of these different research hubs, which I think tends to be overlooked sometimes. The administrators are the ones that are really the machine that keeps this all working. And we've worked hard over the years to form investigator collaborations across the different CTSAs. But I think it's of equal value to create administrative collaborations across the hubs.

Hardeep Ranu: Yeah, I think that's a great point about the administrative side being so important, because there are many things we've come across where it makes sense to talk to another administrator rather than a faculty member, and get that kind of information that can help us. And one of the takeaways that I have found-- and it was at that face-to-face meeting that we had-- was how it's not just the science that we are looking for to be innovative. It's also our individual CTSAs, and the pilot programs, and all sorts of different things that have to be innovative, as well. What are we doing? What are we looking for? How are we capturing that information? I think that's something that I've learned: it's not just the science, it's the background to the science.

Margaret Schneider: Absolutely. I think it's what the NCATS Director would call the science of translation.

Hardeep Ranu: Yeah, yeah, exactly. Exactly. Thank you, Margaret, for taking the time. And I'm very envious of the fact that you are in Southern California.

Margaret Schneider: Thank you, Hardeep. This is fun.

Hardeep Ranu: Yeah, yeah. It was great fun.

[music playing]

Brendan: Thank you for listening. If you've enjoyed this podcast, please rate us on iTunes and help us spread the word about the amazing research taking place across the Harvard community.

Oby: To learn more about the guests on this episode, visit our website Catalyst.Harvard.edu/ThinkResearch.