Biomarkers in early cancer drug development: limited utility.
Review
Abstract
Sponsors bringing a novel antineoplastic agent into human testing face a slew of decisions regarding dosing, schedule, and regulatory path (e.g., indications, patient population, trial design, and end points), often with a meager scientific foundation on which to base these critical decisions. Thus, the cost (in both time and money) of bringing innovative small-molecule drugs or biologics to cancer patients has been enormous, with only a handful of approvals per year despite the more than 700 drugs in clinical development and the more than 1,300 in preclinical testing.(1) A biomarker, defined as "a characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention,"(2) has intuitive appeal as a means of shortcutting the drug development process, largely by optimizing the drug's dose and schedule (pharmacodynamic biomarkers), selecting the patients most likely to benefit from the therapeutic intervention (predictive biomarkers), and acting as a substitute for a true clinical outcome (outcome biomarkers or surrogate end points).(3) However, the enthusiasm for widespread adoption of biomarker studies in early drug development is unjustified because of statistical and cost considerations, as well as a lack of historical evidence for their usefulness.