
Clinical research associates in the US are leaving their jobs at a rate of 30% a year, says a new report from the consulting, tax and advisory firm BDO.

Where turnover among these clinical monitoring professionals had been holding steady at around 25%, the rate jumped by four percentage points between 2017 and 2018. Outside the US, the average is a more modest 16%.

Turnover has plagued clinical/contract research organizations (CROs) for several years, but the increase in the US promises to heighten the problem. The BDO report says the impact on a CRO can be severe: “Losses of team members can disrupt clinical trials, and ultimately damage the relationship with the trial sponsor. High levels of turnover may deter sponsors from engaging in a strategic partnership with a contract research organization.”

There are multiple causes behind the rising CRA turnover, though compensation and competition for these professionals are at the top of the list.

“If CROs hope to retain key talent, they must do a better job of linking pay raises to an employee’s level of contribution and re-assess merit budget increases,” said Judy Canavan, Global Employer Services Managing Director at BDO. “Competency models can help companies quantify this linkage.”

BDO’s analysis found CRA compensation levels remained “largely unchanged during the last five years” even as CRAs significantly increased their skills relative to their rate of pay. Likewise, annual incentive programs, a tool to attract and retain talent, haven’t changed much in the last five years. Payouts as a percentage of salary have actually decreased, the report says.

“Quite simply,” says Canavan, “companies need to link the size of the raise to the increase in an employee’s contribution. This may mean increasing the size of the merit budget. Utilizing a competency model can help companies quantify this linkage.”
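To make that linkage concrete, here is a minimal sketch of a competency-linked merit raise. Every figure in it – the 1-to-5 competency scale, the scores, salaries and budget percentage – is invented for illustration and is not drawn from the BDO report.

```python
# Hypothetical sketch: share a fixed merit budget in proportion to each
# employee's competency gain, so a larger increase in contribution earns
# a larger raise. All numbers below are invented for illustration.

MERIT_BUDGET_PCT = 4.0  # total merit budget as a % of payroll (assumed)

# Competency scores before and after the review period, on a 1-5 scale.
employees = {
    "cra_1": {"salary": 65_000, "score_prior": 3.0, "score_now": 4.0},
    "cra_2": {"salary": 70_000, "score_prior": 3.5, "score_now": 3.6},
    "cra_3": {"salary": 62_000, "score_prior": 2.5, "score_now": 3.5},
}

total_payroll = sum(e["salary"] for e in employees.values())
budget = total_payroll * MERIT_BUDGET_PCT / 100
total_gain = sum(e["score_now"] - e["score_prior"] for e in employees.values())

for name, e in employees.items():
    gain = e["score_now"] - e["score_prior"]
    raise_amount = budget * gain / total_gain if total_gain else 0
    print(f"{name}: +${raise_amount:,.0f} ({raise_amount / e['salary']:.1%})")
```

Under this scheme an across-the-board raise disappears: an employee whose contribution barely moved receives a token increase, while the budget flows to those whose measured competency grew the most.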


Jun 6, 2023

Is It the Drug or the Fitbit Making the Difference?

As if clinical researchers and managers didn’t already have enough to worry about, now add activity trackers to the list.

Smartwatches, Fitbits and similar trackers have the potential to influence behavior, which matters in studies where physical activity is a study endpoint. (An endpoint in a clinical study is an event used to objectively measure the effect of a drug or other intervention.)

If the level of activity is an endpoint in a study of, say, a drug to improve fatigue, researchers need to be able to say that it is the drug that has made the difference. But anyone who has ever used a Fitbit or other activity tracker knows how engaging – addicting, even – they can be. They prod you to get in those 10,000 steps with encouraging messages like, “Only 789 steps to reach your goal.”

As an article on the Clinical Research News website says, “Use of the devices could result in ‘activity peaks’ and ‘activity plateaus’ driven not by drug efficacy but as a response to the smartwatch/fitness tracker targets.”

In other words, who’s to say whether the increased physical activity was the result of the drug or of the tracker’s prodding?

Before commercial trackers became so ubiquitous, researchers gave study volunteers devices that collected activity data without making it visible to them. Commercial trackers make everything visible.

Besides simply counting steps, sophisticated wearables measure all sorts of activity-related variables: heart rate, duration, intensity, distance, sleep and more. Because participants in studies of physical activity can see this data, they can skew the results by working to reach targets and earn badges.
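One way that skew might show up in the data: if participants are chasing the tracker’s badge, daily step totals tend to pile up just above the goal and rarely land just below it. Here is a small sketch of that check; the step counts, goal and window are invented for illustration, not taken from the article.

```python
# Illustrative sketch only: count how many days land just below versus
# just above the tracker's goal. A strong asymmetry (many days barely
# over, few barely under) is one crude signature of target-driven
# behavior. All numbers below are hypothetical.

GOAL = 10_000
WINDOW = 500  # steps on either side of the goal to inspect

daily_steps = [
    9_950, 10_050, 10_120, 8_400, 10_010, 10_230, 11_800,
    10_080, 9_600, 10_150, 10_040, 7_900, 10_300, 10_020,
]

just_below = sum(GOAL - WINDOW <= s < GOAL for s in daily_steps)
just_above = sum(GOAL <= s < GOAL + WINDOW for s in daily_steps)

print(f"days just below goal: {just_below}, just above: {just_above}")
# Here: 2 days just below, 9 just above - a pattern more consistent with
# walking to the badge than with a smoothly acting drug.
```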

The authors of the article – “The Potential of Activity Trackers to Bias Study Results” – suggest a number of measures researchers can take to mitigate the influence of these devices, including prohibiting participants from wearing them, establishing baseline physical activity levels and choosing endpoints less likely to be influenced by the trackers.
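Of those mitigations, establishing a baseline is the most quantitative. A minimal sketch of the idea, with invented figures: measure each participant’s activity during a pre-treatment run-in period, then analyze change from that baseline rather than raw totals.

```python
# Hypothetical sketch of the "baseline" mitigation. The run-in period,
# units and step counts are invented for illustration.

from statistics import mean

run_in_steps = [8_200, 7_900, 8_500, 8_100, 8_300]    # pre-treatment days
on_study_steps = [9_100, 9_400, 8_900, 9_600, 9_200]  # on-treatment days

baseline = mean(run_in_steps)
change = mean(on_study_steps) - baseline

print(f"baseline: {baseline:.0f} steps/day, change: {change:+.0f}")
# Change from baseline controls for each participant's habitual activity,
# though it still cannot separate a drug effect from tracker prodding
# that begins, or intensifies, only after the device is issued.
```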

Ultimately, the writers say, “Additional research is needed in this arena… More certain is that the unblinding of study data could have far-reaching if unintended consequences by introducing bias into the data analysis process.”

Photo by Andres Urena on Unsplash
