I’ve been reviewing a lot of grants lately and have some advice on things that negatively impact scores on proposals for behavioral intervention studies. A thread! 1/12
The intervention is not based on any conceptual/theoretical model. No discussion or testing of any processes of change. In other words, investigator doesn’t seem to know how or why intervention will impact the proposed outcomes. 2/12
The intervention is not informed by past intervention research in the topic or related topics. In other words, it seems made up. Think of your study as the next chapter in a book: it has to logically follow the previous chapters. 3/12
Efficacy is being proposed as the primary endpoint in a feasibility study. 5/12
No content expert on the team or no behavioral intervention expert on the team. 8/12
The intervention is technology-based but the investigator team lacks any computer science, engineering, or other technology expertise (a consultant is not enough; a student is not enough). 9/12
The sample size estimation section lacks enough detail for reviewers to evaluate (and has no references). For example: "A sample of 60 in our three-arm trial is adequate to detect a medium effect size with 20% attrition." Say whaa? 10/12
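To see why a claim like that needs backup, here is a rough normal-approximation sketch of a two-group power calculation (my own illustrative assumptions: medium effect d = 0.5, two-sided alpha = 0.05, 80% power, 20% attrition — not a substitute for a proper, referenced power analysis):

```python
# Normal-approximation sample size for comparing two group means.
# All parameter values below are illustrative assumptions, not a
# recommendation for any particular trial.
import math
from statistics import NormalDist

def n_per_group(d=0.5, alpha=0.05, power=0.80):
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # quantile for desired power
    # Classic formula: n = 2 * ((z_alpha + z_beta) / d)^2 per group
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n = n_per_group()                # completers needed per group
n_enrolled = math.ceil(n / 0.8)  # inflate enrollment for 20% attrition
print(n, n_enrolled)             # 63 79
```

Under these assumptions, each pairwise comparison needs roughly 63 completers per group (about 79 enrolled per arm after attrition) — so a total N of 60 across three arms is nowhere near adequate for a medium effect, which is exactly why reviewers want the arithmetic and a citation spelled out.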
"Me too!" study: applying an already-used model to a new population segment or new topic and equating that to innovation. Since we don't have the $ to test every model on every population segment, the study has to extend the literature more than this. 11/12
A resubmission of a proposal that originally got a middle-of-the-pack score but is virtually identical to the original submission because the investigator argues with the majority of critiques instead of amending the application. 12/12
Would love to hear others' advice as well! 13/12 :)
You can follow @DrSherryPagoto.