Mapping the Path for a K or R Submission

Grants & Funding

When submitting a grant, sixteen weeks out is the magic number for starting the process. (For NIH deadlines, this means starting when the previous cycle’s deadline passes: for the June deadline, early February is the optimal time to begin, and so on.) Understanding the path and having a clear timeline is key to a successful grant submission that doesn’t leave you wrung out.
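If it helps to see the arithmetic, the sixteen-week rule is just a date subtraction. Here is a minimal sketch (the function name and the sample deadline are mine, chosen for illustration, not official NIH dates):

```python
from datetime import date, timedelta

def grant_start_date(deadline: date, weeks_out: int = 16) -> date:
    """Work backward from a submission deadline by the suggested lead time."""
    return deadline - timedelta(weeks=weeks_out)

# A hypothetical June 5 deadline: sixteen weeks out lands in mid-February.
print(grant_start_date(date(2022, 6, 5)))  # 2022-02-13
```

Run it against your actual target deadline to pin the date your timeline should begin.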

Dr. Carol Lorenz, aka Durango Kid, provides everything you need to pace your winter grant to successful submission.  With a history of managing more than 70 grant submissions in a year, Dr. Lorenz is a PhD scientist with guaranteed reality-based expertise in project management. Over the course of seven blog entries, she breaks down the huge endeavor of a grant submission into manageable chunks, and shows you how to create a customized grant timeline to ensure the smooth completion of your application.

Part 1: #*@*! Plan is Not a Four-Letter Word

Why should you do this? How can it help you? Assignment: Decide which grant you want to apply for.

Part 2: Planning to Plan: Gathering Materials

How should you start your plan? Assignment: Assemble supplies for creating a paper timeline.

Part 3: Can You (Really) Do Your Proposed Study?

How to assess feasibility so you don’t bite off more than you can chew (or put into one application). Assignment: Create a project proposal.

Part 4: Researchers: Start Your Timelines

How much time do you really have, and how can you best break it down? Assignment: Add submission date and units of time to your timeline. For February 2022, we suggest these dates.

Part 5: 500 Mile(stones)

Milestones can tell you if you’re progressing at the right pace toward your submission due date. Assignment: Note important dates and transfer them to your timeline.

Part 6: Buckets of Fun (Work?)

How to break your work into packages so you can then define specific tasks. Assignment: Create your buckets.

Part 7: What’s In Your Bucket(s)?

Now that you have your buckets, what do you put in them? Assignment: Brainstorm tasks and fill your buckets, then your timeline.

An example of a completed timeline.

More Resources

Ten Insider Tips: What Your Grants Manager Wishes You Knew

More Things I Wish I’d Known Before I Wrote My K

Three (Grant) Peeves in a Pod: Write Better

Don’t Let Your Research Questions Go Out Without PICOTS

Doing Research / Grants & Funding

All the best aims are wearing PICOTS (pronounced “peacoats”). Specification of your PICOTS* is the minimum outerwear required to prevent your research question from being caught in a downpour of questions. Having these details tucked in gets you ready to have a meaningful conversation with colleagues, evaluate feasibility, brainstorm about how to get the best study done, and prepare to share your concept:

P = Population

I (E) = Intervention (Exposure)

C = Comparator

O = Outcome

T = Timing

S = Setting
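One way to keep these six elements together while drafting (a sketch of my own, not part of the original post; the field names are mine) is to record a question’s PICOTS as a simple structure and check it for gaps:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class Picots:
    """One research question's PICOTS specification; None marks a gap to fill."""
    population: Optional[str] = None
    intervention_or_exposure: Optional[str] = None
    comparator: Optional[str] = None
    outcome: Optional[str] = None
    timing: Optional[str] = None
    setting: Optional[str] = None

    def gaps(self) -> list:
        """Return the elements still left unspecified."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

q = Picots(population="Adult women registered for a 12-week lifestyle program",
           outcome="Weight change from intake to final session")
print(q.gaps())  # the elements still needing specification
```

An empty `gaps()` list means your question is wearing its full set of PICOTS.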

Use PICOTS as a checklist for operationalizing research questions and probing how the research would shape up under different assumptions. Ask:

Population: 

  • What group of participants is ideal?
  • Whom does this imply we need to include or exclude?
  • How would we operationalize those criteria?
  • What influence will that have on our ability to identify and recruit participants?
  • Do we need to worry about proof of concept or generalizability more at this stage?

Intervention:

  • What will participants do/experience in the study that is being tested for its effects?
  • What dose, frequency, intensity will be tested?
  • Do we need to invoke a specific behavioral or causal model?

Or Exposure:

  • What is the behavior, biomarker, experience, or metric whose effect we want to evaluate?
  • How will it be measured?
  • How will we ensure quality of the measure?

Comparator:

  • What comparison provides the most relevant contrast (e.g., usual care, no intervention, placebo, etc.)?
  • What analytic approach will best support the comparison?
  • Does this comparator help test our causal model or could it be stronger and more direct?

Outcome:

  • What is our measurable outcome?
  • How will measurement be operationalized?
  • Do we need primary and secondary outcomes?
  • Can we achieve adequate power to assess the outcome?
  • If there is loss to follow-up, do we have alternative ways of assessing outcomes?

Timing:

  • Over what time frame will participants be recruited?
  • What is the time period over which intervention will be conducted for an individual participant?
  • How long after completion of intervention will measures be collected?
  • When will outcomes be measured? How wide is the tolerable window for measurement?

Setting:

  • Where will the research be conducted or participants be recruited (e.g. academic tertiary care center, network of health department clinics, community-based, etc.)?
  • What are the characteristics of that setting?
  • If using extant data, what was the setting in which the data were developed?

Try it, you’ll like it. And it’s better than the alternative of getting soaked later by questions and requests for the details needed to clarify your research concept.

Taking PICOTS for a Spin

For example, if you’re interested in asking: “Do community-based lifestyle interventions really work?” or “What determines who stays in community-based lifestyle interventions?” work the PICOTS:

Initial Question: “Do community-based lifestyle interventions really work?”

Goal: Pilot intervention study with a primary aim of determining if an intervention results in weight loss

In this case a pilot would be a typical approach for estimating the effect size, feasibility, participant satisfaction, loss to follow-up, and need for adjustments to inform design of a future definitive randomized trial. So we sketch a picture of what the study could look like:

P: Adult women with physician’s permission who are registered for the first session of the 12-week New Beginnings Program, and who speak English or Spanish.

I: Structured small group (n=5 to 8) coaching program with 1) specific weekly goal setting targets (eliminating sodas, understanding metabolic effects of exercise and tracking, counting carbohydrates, planning daily physical activity, enhancing sleep, writing an individual vision for one’s health, making a long term health contract with oneself, etc.) 2) three small group resistance and circuit training coached sessions each week, 3) social media peer connections, and 4) individualized exercise, diet and stress-reduction prescriptions.

C: Women who have applied for the program and are eligible but who are currently on the wait list with an anticipated wait time of 14 or more weeks.

O: Primary outcome will be weight loss, measured as the difference between weight at the intake session (in pounds to one decimal place, on a scale provided and calibrated by the study) and weight at the last group session. Outcomes will be grouped by completion status, where completers attended ≥75% of scheduled sessions and non-completers attended fewer. Weight loss will also be described by group for each of the 12 weeks. To assess secular trend among those with an intention to lose weight, the wait-list comparison group’s weight (secondary analysis) will be collected from the initial application (or as documented in the physician’s permission letter in the application) and at the intake session, adjusted for elapsed time between application and start of the program.

T: The intervention will last for 12 weeks of structured lifestyle and exercise coaching. Informal peer and social media networks established during the intervention will continue unsupervised after completion. Secondary outcome data will be collected at 3, 6, and 12 months after completion of the intervention.

S: Privately owned gym facility partnered with non-profit (501C3) to provide a comprehensive lifestyle intervention program to means-tested low income women, the majority of whom are age 40 and older, African American and weigh, on average, more than 200 pounds.
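The completer rule in the outcome above (attending at least 75% of scheduled sessions) is straightforward to operationalize. A minimal sketch, where the threshold comes from the example and the function name is my own:

```python
def completion_status(attended: int, scheduled: int, threshold: float = 0.75) -> str:
    """Classify a participant by the example's attendance rule."""
    if scheduled <= 0:
        raise ValueError("scheduled sessions must be positive")
    return "completer" if attended / scheduled >= threshold else "non-completer"

# 12 weeks with three coached sessions per week = 36 scheduled sessions.
print(completion_status(27, 36))  # 27/36 is exactly 75%: "completer"
print(completion_status(20, 36))  # below threshold: "non-completer"
```

Writing the rule down this precisely is exactly the kind of operational definition reviewers look for.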

Initial Question: “What determines who stays in community-based lifestyle interventions?”

Goal: Observational study of whether baseline mental and physical health status, locus of control, and dispositional optimism are associated with completion of a community-based lifestyle intervention

P: Adult women with physician’s permission who are registered for the 12-week New Beginnings Program, which is a structured small group (n=5 to 8) coaching program with 1) specific weekly goal setting targets (eliminating sodas, understanding metabolic effects of exercise and tracking, counting carbohydrates, planning daily physical activity, enhancing sleep, writing an individual vision for one’s health, making a long term health contract with oneself, etc.) 2) three small group resistance and circuit training coached sessions each week, 3) social media peer connections, and 4) individualized exercise, diet and stress-reduction prescriptions.

E: Lower levels^ of physical and mental health as assessed by the Short Form 36, lower self-efficacy (assessed by the Generalized Self-Efficacy Scale), and greater pessimism (assessed by the Revised Life Orientation Test) at baseline.

C: Higher levels^ of physical and mental health as assessed by Short Form 36, internal locus of control, and greater optimism at baseline incorporated into logistic regression models to assess association of characteristics with outcome.

^ Cut offs to be determined by distribution of traits in context of national normative reference data.

O: Program completers will be classified as those who attended ≥75% of scheduled sessions; non-completers attended fewer. We will also capture week of attendance for a secondary time-to-event analysis.

T: The assessment will be completed within 12 weeks.

S: Privately owned gym facility partnered with non-profit (501C3) to provide a comprehensive lifestyle intervention program to means-tested low income women, the majority of whom are age 40 and older, African American and weigh, on average, more than 200 pounds.

But I can’t possibly know these details when I first think the thought!?

True, but you can get much closer than you think. Start by daydreaming, then add parameters, even if they are initially fantasy. This approach to shaping questions jumpstarts thinking that then leads to:

  • Productive generation and sifting of research ideas.
  • Greater focus for literature review.
  • Strategic thinking about multiple aspects of feasibility.
  • Weighing the best choices for measures of exposure, covariates, and outcomes.
  • Enhanced ability to rapidly gather input from others.

Related Posts:

Acing Your Observational Research Aims

All research proposals – grants, dissertations, internal funding – must ace the description of aims.  Many scientific questions are interesting.  Not all are useful.  You must persuade your readers that the proposed aims/hypotheses to be tested and the related analysis will fill gaps in scientific knowledge.

Don’t Crash on Approach

Getting the approach – the methods section of your grant –  fine-tuned is literally the heart of it all. You must land your science smoothly. Study section members know, and recent evidence confirms, your grant’s score is not an equal weighting of component scores. NIH criterion scores are for significance, innovation, approach, investigators, and environment.

* Gordon Guyatt initially described PICOTS in Guyatt G, Rennie D, Meade M, Cook D. The Evidence-Based Medicine Working Group Users’ Guides to the Medical Literature. 2nd edition. McGraw-Hill; Chicago: 2008. Subsequently the framework became standard for formulating inclusion and exclusion criteria for the conduct of systematic evidence reviews and meta-analyses of interventions.

Using NIH RePORTER to Find Your Guide

Grants & Funding

In much the same way the Assisted Referral Tool can help you pick a study section, the Program Official option for NIH’s Matchmaker tool provides insight into the Program Officer who works with the most projects that look like yours.

To use it, visit the Matchmaker portion of NIH RePORTER. (New to RePORTER? Here’s how to find out a wealth of information about grants on your campus and elsewhere.) In the text box, you can enter a paper or grant abstract, or any other text you want to search on up to 15,000 characters. Click the “Similar Program Officials” button.

In a few moments, the system will return a list of up to 175 program officers, starting with those whose grant portfolio most closely matches the text you put in. Click on a name to bring up a list of the active grants for which that person is the PO. Are they in the same ballpark as your research? Then go back to your search results and grab the email address of that PO to start a conversation.

Why might you want to talk to your PO before submitting a grant? Several reasons:

  • To find out if the institute is enthusiastic about your research area or if there might be a better fit.
  • To get clarification on which study section is ideal.
  • To confirm the appropriate FOA.
  • To get suggestions for alternative programs, FOAs, or institutes.
  • If you have questions about your budget or scope of work.

Also of note: Matchmaker will give you graphs of how many POs at each institute work with grants that look like yours, as well as how many grants of each mechanism (R01, R03, K08, etc.) they handle.
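If you want to script this kind of lookup rather than use the web form, NIH RePORTER also exposes a public projects-search API. The sketch below only builds the query payload; the criteria field names (`org_names`, `activity_codes`) reflect my reading of the v2 API documentation, so verify them against the current docs before relying on them:

```python
import json

# Hypothetical query mirroring the post's example: R61 awards at Vanderbilt.
# Criteria names are assumptions from RePORTER's v2 API docs -- verify them.
payload = {
    "criteria": {
        "org_names": ["Vanderbilt"],
        "activity_codes": ["R61"],
    },
    "limit": 25,
}

# To run the search, POST this JSON to:
#   https://api.reporter.nih.gov/v2/projects/search
# e.g., requests.post(url, json=payload).json()["results"]
print(json.dumps(payload, indent=2))
```

The same payload shape works for PI names, fiscal years, and text search, which makes it easy to automate the nosy questions the next post encourages.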

Additional Resources

Harness the Immense Power of Nosiness in NIH RePORTER

Which Study Section Should I Pick? Try the Assisted Referral Tool!

Video: How to Use NIH Matchmaker

Fresh Ideas for Writing Innovation in Your NIH Grants

Grants & Funding

NIH information for grant authors prompts researchers to ask these questions as they describe innovation:

  • Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions?
  • Are the concepts, approaches or methodologies, instrumentation, or interventions novel to one field of research or novel in a broad sense?
  • Is a refinement, improvement, or new application of theoretical concepts, approaches or methodologies, instrumentation, or interventions proposed?

[Acing Your Aims includes a checklist for whether your aims meet these goals.]

The translation of these answers into a grant section often falls flat in dense paragraphs of text. Consider these tips to produce a “novel” innovation section.

1) Quote NIH back to your reviewers and connect the dots.

Here’s a real example, written after being told cohort methods are not innovative even though no such cohorts exist:

NIH evaluation criteria for innovation speak directly to the value of shifting “current research or clinical practice paradigms” using novel theoretical concepts, approaches or methodologies. Relatively neglected areas may be at a disadvantage if we don’t recognize the importance of laying the correct foundation. If a foundation is missing, as it is in research on fibroids and reproduction, then the novelty and value of a large, community-recruited prospective cohort is immense.

2) Bullet the key innovations to extract them from dense text format and better underscore the length of the list of new elements you are bringing to the science:

We will be the first to:

  • Translate the use of an oscillation overthruster into clinical use.
  • Create intermediate vector bosons from the annihilation of electrons.
  • Extend this annihilation to the electron antimatter counterpart, positron.
  • Travel through solid tumor matter.
  • Achieve pineal tumor destruction in the eighth dimension.
  • Disseminate this approach to guide research on other tumor types.
  • Return funding to NIH because we’re just that good.

Verbs help convey the action even for Buckaroo Banzai.

3) Cite or provide brief excerpts from prominent texts or guidance from professional organizations that currently rely on incomplete information or biased study designs and methods. Go cautiously but it can be done gently:

Trusted sources and text books continue to report an association with pregnancy loss and support potential myomectomy to reduce miscarriage risk, in the absence of rigorous scientific evidence.[refs – case must be made in significance] This cohort will provide the largest prospective cohort to address the association of fibroids with miscarriage and has the potential to challenge an existing paradigm and reduce unnecessary surgical intervention. [See how we slipped rigor in there?]

4) Note those calling for your research:

The 2020 vision statement of the Association to Cure Everything specifically calls for exploration of new dimensions as an approach to providing therapeutics through alternate realities.

As the RFA underscores, potential for gene-by-drug analyses to reduce harms is substantial.

5) Go wild and keep the reviewers’ attention with a quote or clinical vignette:

On disrupting dogma:

As Mark Twain described: “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

On why we need answers:

John Whorfin was dead on arrival to the emergency room after spending $100k. [ref] He is among more than 2,000 cases of individuals harmed by illicit oscillation overthruster use this year. Cancer is a devastating diagnosis and the public will continue to pursue extreme options. Our purpose is to translate this cutting-edge technology into a viable, safe, and affordable clinical tool.

Pure bragging:

The Latin root of innovation means “to renew or change.” RFTS fibroids data is well along the path to changing how we understand the role of fibroids in pregnancy. The proposed expansion of the cohort will speed us along.

Remember you are marketing your ideas. Give your pitch to colleagues, family and friends until the innovation and value-proposition are clear in plain language.

Harness the Immense Power of Nosiness in NIH RePORTER

Grants & Funding

As a manager for our career development programs, I get many questions from trainees and faculty that can actually be answered by using NIH RePORTER.  You can find out all kinds of nosy things like:

  • Who else on campus has the kind of grant I’m writing (so I can ask if they’d share a copy)?
  • Who else in the country does exactly the kind of science I’m doing?
  • How many grants does my NIH institute fund each year?
  • Which grants did the study section I’m aiming for fund in the last several years?
  • Who on campus has VA grants (VA grants are now part of RePORTER)?

(Note to trainees: Not that I mind answering your questions.  But it’s probably faster to search yourself.)

First, you’ll need to visit the site.  You’ll find a form where you can search by

  • PI name
  • Organization
  • City, State, or Congressional District
  • Project number (including parts of numbers, so for example you can retrieve a list of all R01s in the nation or at a particular institution)
  • NIH institute
  • Study section

And more.  At the top, you can also choose whether to search only for current (“active”) projects or grants whose funding period ended up to 20 years ago.

Say you’re writing one of those fancy new R61 grants for exploratory research and you want to know if anyone else on your campus has one.  You can find out!  Let’s take Vanderbilt for an example.  I’ll type “Vanderbilt” into the organization field, and “R61” into the application number field.

Guess what?  There is someone on campus who has one of these grants.

If I click on the title, I can see the grant abstract.  By using the “Details” tab at the top, I can view things like the project start and end date, funding for this fiscal year, study section and program officer.  The “Similar Projects” tab provides a list of NIH funded grants that have similar key words, while “Results” links to papers produced from work funded by the grant.

You can run this same kind of search by any of the other criteria on the form, including a text search that combs abstracts and key terms provided by the PI.  (You’ll want to be specific in your terms, though.  “Cancer” gets you 22,870 results.  Maybe you have time to sift through them all, but I sure don’t.)

Want RePORTER to read your mind, or at least your conference abstract?  Try the “Matchmaker” tab at the top of the search form.  You can paste in a chunk of text up to 15,000 characters—that’s pretty much a paper right there, but you can use abstracts, drafts of your aims, heck, maybe you’ve tweeted something you want to search for—and Matchmaker will analyze it for key terms and spit out the 100 most closely related projects, listing them in order from most to least similar.  HOLY COW.

If you go to the “Quick Links” tab at the top and choose “Funding Facts” or “NIH Data Book,” your brain will soon explode from all the funding data NIH is about to shove in it.

Want to know the success rate last year for K08s at any institute or the NIH as a whole?  The total amount of funding that was available for new R01s in each of the last five years?  Funding Facts has all this and more, including info on F awards.

The Data Book will not only give you that information, but it gives it to you in GRAPHIC FORM.  For example, here’s the R01 success rate:

You can get it in table format by clicking on the “Data” tab.

The Data Book also has super-cool charts and figures on success rates and awards by new investigator status, gender, MD vs. PhD, and other criteria.

NIH RePORTER: Learn it, love it, use it.  Be nosy.  Be informed.

Acing Your Observational Research Aims

Grants & Funding

All research proposals – grants, dissertations, internal funding – must ace the description of aims.  Many scientific questions are interesting.  Not all are useful.  You must persuade your readers that the proposed aims/hypotheses to be tested and the related analysis will fill gaps in scientific knowledge.

Together with a thoughtful synthesis of the literature, this worksheet will help you determine if you can justify excitement about your aims for observational studies (cohorts, case-control, etc).  Interrogation of your aims will force you to clearly identify the claims you can make for exactly how you will be advancing the science if allowed to do the proposed research.

Be brutally honest with yourself in this evaluation. Your readers and reviewers are certain to be.  If you can’t defend at least one strong “Yes, I am using a superior approach to get this answer” per aim, re-think the aim. You may be lost in the land of incremental contributions and not distinctive progress.

Once you have a grid documenting powerful aims, this approach will help you tell others, in an organized way, why doing the proposed research is important, has significance in your field, and will bring innovative contributions into play. They’ll see you are on the path to discovery.

Don’t Crash on Approach

Grants & Funding

Getting the approach – the methods section of your grant –  fine-tuned is literally the heart of it all. You must land your science smoothly. Study section members know, and recent evidence confirms, your grant’s score is not an equal weighting of component scores. NIH criterion scores are for significance, innovation, approach, investigators, and environment.

No surprises here: approach has the highest weight. Reviewers care most about whether the scientific methods are sound. For studies with human participants, from case-cohort studies to clinical trials, you must work through this flight checklist:

  • Brief overview of the study design/population (repeated as necessary if this changes across aims).
  • Summary/figure detailing the timing and sequence of data collection including biological specimens, interview data, exposure measures, and outcomes.
  • Succinct summary of inclusion and exclusion criteria for participants (and if needed the larger study from which participants are identified).
  • Flow diagram indicating how many individuals were, or are estimated to be, excluded. Provide reasons if you have an extant cohort.
  • Clear estimates or exact numbers (better) of how many individuals will be available or recruited for analysis in each aim.
  • Operational definitions for: 1) main exposure/intervention; 2) primary and secondary outcomes; and 3) key candidate confounders.
  • Text introducing measures in a logical order (e.g., order that data is collected or order of relevance to aims).
  • Summary of general data quality assessment (e.g., logic checks) and data cleaning steps.*
  • Information about how missing or incomplete data will be handled.*
  • Details of quality control approach for any measures (labs, surveys, etc.).*
  • Description of analytic approach including data preparation, models to be used, and how choices will be made for any analysis of effect modification and confounding for each aim, if applicable.*
  • Methods for how you will check for and handle any violations of model assumptions.
  • Specific delineation between primary analyses and secondary analyses.
  • Power calculations supplemented with a table or figure.
  • Summary of potential challenges and solutions if they are encountered.
  • Timeline for completion of the work.
  • Conclusion/summary of the strength of the approach with a final pitch covering why the science is innovative.

Work the checklist. The glidepath provided by crisp and clear operational details will bring you in. A sound approach is required for a smooth landing.
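For the power-calculation item in the checklist, the standard two-group normal-approximation formula can be sketched with nothing but the standard library. The effect size, alpha, and power below are placeholder values for illustration, not recommendations:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for a two-sided, two-sample comparison
    of means, using the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A medium standardized effect (d = 0.5) at 80% power needs ~63 per group.
print(n_per_group(0.5))  # 63
```

A table built from a few calls like this, across plausible effect sizes, is exactly the kind of supplement the checklist asks for; for the grant itself, confirm numbers with a statistician or a dedicated power package.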

* These items, in part, speak to the requirement to describe what aspects support rigor and reproducibility.

Finally! Data on What Study Section Really Cares About

Grants & Funding

In 2009, NIH revamped its scoring system, asking reviewers to provide scores ranging from 1 (best) to 9 (worst) for an application’s Environment, Investigator, Innovation, Approach, and Significance.

NIH has emphasized Innovation (insert jazz hands), leaving many a weary grant writer to feel a need to invent fabulous new techniques to take DNA out of things, put it back in, and take it back out another time to reassure study sections that the gene you are studying does the thing you thought it would. And if you can do that in a nano platform with a high throughput screen, all the better.
It takes only a brief gander at NIH’s instructions to authors to reinforce the need for extra technological bedazzlement. It’s right there in big letters.

“Highlight Significance and Innovation”

It turns out that strategy may not be all it’s cracked up to be. PLoS One published a study by Eblen et al. that evaluated over 70,000 applications, looking at which metrics best predicted funding success. Innovation and Significance were NOT the winners. Approach was.

Yes, the entirely unglamorous work of doing a project the right way, asking smart questions, and using robust design correlated far better with success than the other metrics.

Several questions leap to mind. Why didn’t NIH do this analysis earlier? It seems they’ve been directing folks to the wrong area to emphasize. Either that or study sections are going rogue. And, here’s a vexing one: are we so precious that we all have to get 1’s, 2’s and 3’s for Investigators and Institutions? I don’t love statistics, but if everyone scores above average, doesn’t that mean we are all average, or the space-time continuum is going to implode or something?

Read the paper. It’s pretty impressive and an excellent reason to slow down, think harder, and make sure your study section is clear that not only is your question timely and relevant, but that you are pursuing it in a thoughtful and thorough manner.

Figure from PLoS One: “How Criterion Scores Predict the Overall Impact Score and Funding Outcomes for National Institutes of Health Peer-Reviewed Applications,” by Matthew K. Eblen, Robin M. Wagner, Deepshikha RoyChowdhury, Katherine C. Patel, and Katrina Pearson.