By Ian Mullan, A&O co-founder, 30 May 2021.
In my previous blog entry, I briefly spoke about what makes for an effective metrics approach and the need for contextual metrics. In this entry, I will give some insight into why metrics initiatives often miss the mark or are complete failures.
Many study teams still rely on the limited metrics on offer from the EDC platforms themselves. Whilst some of these are attractive to look at and easy to access, they still miss the mark in terms of being truly meaningful. Where, for example, are the worst-performing sites ranked by total completed patient visits?
A number of tools have sprung up and evolved over the past 10-15 years. We’ve all heard of JReview, Tableau, Business Objects and Spotfire, to name a few. These can be very useful for single-table metrics and can be visually impressive. But true contextual metrics that can also combine with external data require cross-table joins and mapping, and here they all fall flat, for these main reasons:
Almost all studies are different in design
Cannot simply ‘plug into’ various data sources
The programmer is not experienced in clinical data
People are self-interested and wish to reinvent the wheel
People assume this stuff is achievable 100% automatically (or don’t want it to be automatic!)
Wrong choice of software
Outputs are too restrictive/hard coded
It’s hard to find programmers who know the platform when it needs updating (Or the programmers don't really understand what metrics are about!)
Let us look at these points, in turn.
1) Almost all studies are different in design
This is an obvious given, with so many therapeutic areas and CRF-build models. However, even within the same systems and disease areas, studies can vary greatly. For example, understanding where a patient is on the study (disposition) requires extracting data from patient status tables. While this might be neatly rolled up into the DS domain, it might equally be spread across several raw data tables at various points in a patient’s journey (screening, treatment, end of study, safety follow-up, etc.). Because all studies are different, a single, familiar metrics tool requires a ‘reactive’ approach, especially from someone with a history of clinical trial study builds. To assume that the latest glitzy programming platform will handle anything other than top-level flat metrics is largely a fantasy.
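The disposition example above can be sketched in a few lines. This is a hypothetical illustration, not a real study build: the table names, patient IDs and dates are all made up, and real raw tables would of course carry far more fields.

```python
# Hypothetical sketch: deriving patient disposition when status data is
# spread across several raw tables rather than one tidy DS domain.
# Each raw table maps patient ID -> date the milestone was reached.
screening = {"1001": "2021-01-04", "1002": "2021-01-06"}
treatment = {"1001": "2021-01-18"}
end_of_study = {}

def disposition(patient_id):
    """Return the furthest milestone a patient has reached."""
    # Check milestones in reverse study order; the first hit wins.
    for label, table in [("End of Study", end_of_study),
                         ("Treatment", treatment),
                         ("Screening", screening)]:
        if patient_id in table:
            return label
    return "Not enrolled"

print(disposition("1001"))  # Treatment
print(disposition("1002"))  # Screening
```

The point is that the list of tables checked is study-specific: a metrics tool must be adapted (reactively) per study rather than assuming one universal disposition source.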
2) ‘Plugging into’ various data sources
Typically, a clinical unit will have numerous data sources that must be accessed to create metrics. For example, a CRO might be running studies in InForm, RAVE, or on paper. Similarly, a pharma may have recently acquired a biotech using another CRF system altogether, with studies running for months or years. In either business environment, the challenge is the same: how can metrics be extracted from different CRF platforms and presented in the same, familiar metrics tool? Even if a metrics tool has been successfully built for one CRF model, what about combining it with external data for other essential KPIs (e.g. sample tracking)? For this to become possible, a system-independent tool is required: a tool that sits outside of any EDC system and can have various data sources fed into it (see figure 1 below as an example), as opposed to a tool designed and intended for one single system.
Figure 01: Graphic showing how PK reconciliation can happen in practice; the same principle applies to metrics. Taken from our PK blog: Data Musings: PK reconciliation - what happens when your data has no sample ID?
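The ‘feeding in’ of different sources described above boils down to mapping each system’s export into one common schema before any metric is computed. The sketch below assumes made-up column names for RAVE- and InForm-style CSV exports; real exports differ per study build, which is exactly why the mapping layer must be configurable.

```python
# Hypothetical sketch of a system-independent loader: each EDC export is
# translated into one common schema, after which metrics code never needs
# to know which system the rows came from. Column names are assumptions.
import csv
import io

# Per-source column mappings: common field -> source-specific header.
MAPPINGS = {
    "rave":   {"site": "SiteNumber", "subject": "Subject", "visit": "FolderName"},
    "inform": {"site": "SITEID",     "subject": "PATID",   "visit": "VISIT"},
}

def load(source, text):
    """Read one system's CSV export and return rows in the common schema."""
    mapping = MAPPINGS[source]
    rows = csv.DictReader(io.StringIO(text))
    return [{common: row[col] for common, col in mapping.items()} for row in rows]

rave_csv = "SiteNumber,Subject,FolderName\n101,1001,Screening\n"
inform_csv = "SITEID,PATID,VISIT\n202,2001,Baseline\n"

# Both systems now feed one combined, system-independent dataset.
combined = load("rave", rave_csv) + load("inform", inform_csv)
```

Adding a newly acquired biotech’s CRF system then means adding one more mapping entry, not rebuilding the tool.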
3) The programmer is not experienced in clinical data
The problem may not lie only with the programmer, who may be excellent in their own right but may not fully understand the clinical data flow or the importance of including certain datapoints. Disparity can also be introduced by a project manager with no deep knowledge of trials who does not place emphasis on the right kind of KPIs. Either way, if you ditch the data manager, you have ditched the very thing that makes a unifying, contextual metrics tool possible. You need the right people, who know what metrics the study team requires.
4) Self interested individuals / reinvention of the wheel
A successfully working metrics tool is a great thing to have. But invariably someone will find something wrong, something that could be improved upon, and rather than seek improvement will simply scrap the one tool that works. Instead, a decision is made outside the team (often by someone who is not qualified) to go for the latest software fad and restart the metrics project all over again. The external implementation team will label themselves the ‘champions’ of improvement, yet very often their solution is a short-lived affair that soon falls out of favour with the end users.

An existing working metrics tool only exists because someone kept it simple and it ultimately achieved what it was made to do! If a metrics tool is still regarded as a stop-gap, then make sure it is kept simple and created in software that has been around for a long, long time, and will continue to be around for just as long. More importantly, put it in software that study team members want, that requires no download or hours of training. Excel is an obvious choice. In a functioning management system, opting to reinvent the wheel of a currently working solution suggests something is very wrong with the management of that organisation.
5) People assume this stuff is achievable 100% automatically (or don’t want it to be automatic!)
I recall an occasion when, having successfully developed a full-range CRF KPI metrics tool, a young new-starter (with no more than a couple of months’ DM experience) complained that it required 5 minutes of extracting data and pressing a big fat button to fire it up. Obviously, this young person felt one of the world’s biggest industries couldn’t possibly require anything manual at all (even pressing buttons!). For 5 minutes of work once a week, it gave live-time data summaries and any kind of report the study team could desire. But, no, 5 minutes is far too long for some, even when it has replaced 6 hours and eliminated human error.
(As a reminder here: to have a tool that works on all systems and is also capable of ‘marrying’ external data, the tool must be system-independent. By definition, then, it will always require logging into systems, extracting raw data and uploading it into the tool. In other words, for the best kind of tool, there will always be a minor element of human interaction.)
The opposite situation is colleagues who, it’s fair to say, use metrics compilations as a bit of ‘down time’. I have had it said to me before: “I haven’t been a data manager for 20 years to come to work and press buttons!”.
You can’t win with such individuals either way, but receiving faith from others even when a button needs to be pressed is a key ingredient for success.
What might such a button-pressing/upload of data involve? This can be demonstrated via our ExCompare tool in figure 02 below!
Figure 02: how separate data source files can be uploaded into a single, central, metrics tool. Using A&O's ExCompare as an example. See: www.apples-and-oranges.co.uk/excompare
6) Wrong choice of software
As touched upon already, introduce any new metrics tool in any software and the very first question you are likely to receive is “can I export this to Excel?”. So why not just do it in Excel anyway? It's still shipped with all company-provided laptops, requires no extra licence and requires no new training. Excel has been here for 30 years and won’t be going anywhere anytime soon. That’s not to knock other tools or software; it is simply a matter of listening to the customer (i.e. the study team members).
If you are concerned about the security of MS Office, then remember that in this day and age the work environment is very secure, with VPN networks and computing equipment encrypted to the highest standards. Any DM-self-administered use of Excel evidently takes place in those same environments. This is no less secure than the use of other listings, data tools, extracts thereof, screen-clippings sent over email, etc.
Indeed, I would argue that using a single, central tool for metrics only serves to reduce the random use of other, uncoordinated communication methods and tools. I would go further and suggest that any extracts from a central metrics tool are shared as files on agreed, secure networking platforms such as SharePoint-type sites.
Figure 03: a metrics tool should have metrics relevant to each individual as exportable reports, as per this Excel example. Learn more: www.apples-and-oranges.co.uk/4site
There is always the risk that an inattentive employee may send Excel creations beyond the corporate firewall. But Excel tools can be saved password-protected, and can also be saved as macro-enabled files that will only run if they detect they are sitting in your company’s VPN environment (a future blog on this, perhaps...). Only being operable in your company’s environment adds an extra layer of security should a rogue departing employee wish to take such creations with them. That leaves only the very determined hacker to crack such a creation, but then this is true of any type of tool.
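The environment gate described above is, in practice, VBA inside the workbook; the principle, though, is simple enough to sketch in any language. The Python below is purely illustrative: “CONTOSO” is a made-up corporate domain name, and a real implementation would choose its own environment checks.

```python
# Hypothetical sketch of an environment gate: the tool refuses to run
# unless the machine reports the expected corporate domain.
# "CONTOSO" is a made-up domain name used here for illustration.
import os

EXPECTED_DOMAIN = "CONTOSO"

def may_run(domain=None):
    """Allow the tool to run only on machines joined to the company domain."""
    if domain is None:
        # On Windows, USERDOMAIN typically holds the logon domain.
        domain = os.environ.get("USERDOMAIN", "")
    return domain.upper() == EXPECTED_DOMAIN

# Outside the corporate environment the tool simply declines to start.
print(may_run("CONTOSO"))  # True
print(may_run("HOMEPC"))   # False
```

A check like this is a deterrent rather than hard security, which is why the text pairs it with password protection and secure sharing platforms.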
If you go down a macro-enabled route, it may be your company policy that the eTMF does not permit macro-enabled files to be stored within it. The answer to that is simply to save extracts as xlsx (non-macro-enabled) files, which will, in any case, demonstrate highly advanced oversight for inspection-readiness.
Indeed, it is by using what some colleagues may describe as the less attractive Excel option that you automatically document and demonstrate full study oversight, such as en-masse PDF reporting:
Figure 04: extracting data summaries to PDF not only provides teams with useful, secure, reports specific to them with extreme speed and zero human-error, it also demonstrates robust oversight for inspection-readiness. See: www.apples-and-oranges.co.uk/4site
What else can I really say about it? Excel can become a sophisticated tool while encompassing necessary security, even with outputs in safer PDF formats.
And if you think Excel is a wrong choice of software because it is ‘flat’, unchangeable and unclickable, think again. Frustrated at the beginning of the COVID pandemic by the media's inability to represent COVID rates proportionally, A&O created its own interactive COVID map (see figure 5, below). Few users realised this could be done in Excel!
Figure 05: Graphics need not be 'flat'; as demonstrated by A&O's COVID tracker, shapes can be created and made to be interactive. See: www.apples-and-oranges.co.uk/covid-19
Figure 06: Ideally, displays should be adaptable with no advanced end-user skills, as demonstrated by A&O's PK Reconciliation tool. See: www.apples-and-oranges.co.uk/pkreconciliation
7) Outputs are too restrictive/hard-coded
If your metrics reports are not giving your team everything they need, they will likely end up duplicating effort and working things out for themselves. While we cannot avoid hard-coding the framework of a tool, giving the user the flexibility to pick and choose items will lead to keen uptake and use. For example, a monitor might control 5 sites and would like to see only those sites ranked together against the relevant worst-performance indicators. Users simply need to be able to pick and choose what they want to gather.
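The monitor example above can be sketched as a tiny pick-and-choose ranking. Site numbers, query counts and the chosen indicator (open queries per completed visit) are all made-up assumptions for illustration; the point is that the selection of sites is the user’s, not hard-coded.

```python
# Hypothetical sketch: a monitor ranks only the sites they control by a
# worst-performance indicator (here, open queries per completed visit).
sites = {
    "101": {"open_queries": 12, "completed_visits": 40},
    "102": {"open_queries": 30, "completed_visits": 25},
    "103": {"open_queries": 5,  "completed_visits": 50},
    "201": {"open_queries": 9,  "completed_visits": 10},  # someone else's site
}

def rank_worst(my_sites):
    """Rank the chosen sites worst-first by open queries per completed visit."""
    rate = lambda s: sites[s]["open_queries"] / sites[s]["completed_visits"]
    return sorted(my_sites, key=rate, reverse=True)

# A monitor covering sites 101-103 sees only their own league table.
print(rank_worst(["101", "102", "103"]))  # ['102', '101', '103']
```

Site 201 never appears because the monitor didn’t pick it; the ranking logic itself stays fixed while the selection is entirely flexible.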
One such example is the adaptation of our 4Site metrics tool that aided in tracking COVID in the early days of the pandemic, ranking countries together - see figure 7, below.
Figure 07: live-time flexibility in visuals and outputs will give individual team members what they want. See: www.apples-and-oranges.co.uk/covid-19
8) It’s hard to find programmers who know the software when it needs updating
Using Excel is often met with resistance in our industry, even though it's used every single day by most end-data users! But if you choose another, highly-spoken-of system, consider this: can you resource someone from your own team(s) who can update it when required? Or will you have to spend extra resource externally, perhaps taking months to find someone?
Most mid- to large- DM teams are likely to include colleagues with reasonable Excel skill-sets, so there is a strong chance of resourcing solutions from your existing colleagues by choosing Excel.
In Summary
To achieve the right metrics & KPI tool, you need an approach and an end product that:
• Takes a reactive approach
• Is system-independent
• Involves the right people
• Provides live-time data summaries
• Has the faith of its users
• Comes from a team that listens to the customer
• Lets users pick and choose what they want
Is this just theory?
By now you might be thinking this blog sounds like wonderful theory. However, the founders of Apples and Oranges Data Solutions have over 20 years of data management behind them and repeated, direct experience of automating metrics and KPIs across several companies. One such metrics solution created by A&O’s founders was for a US pharma that had multiple study builds in accordance with ‘strict’ company ‘standards’. In reality, as the studies had originally been built by different CROs, each programmer had interpreted the standards differently and raw-data variable names were disparate. Although a known issue, it was only when the metrics tool was created that the limitations this placed on full oversight were appreciated (it must have kept the externally hired SDTM programmers very busy!).
So what software was chosen in this case? Excel, of course! We even named it creatively: "MetrEx" which hopefully needs little explanation (Metrics + Excel).
Gathering and unifying these disparate study builds made it possible to combine all studies into a ‘master’ tool: a tool that enabled full oversight of the therapeutic area, aiding resourcing and site-by-site performance comparisons (key for site selection activities). Unifying the study builds finally provided useful information that could feed into clinical operations activities, RBM, CRO selection… clearly advancing above and beyond simple metrics.
Many data issues beyond the originally intended metrics capabilities were also uncovered because, again with the study builds unified, some very quick-fire reports could be generated via a ‘wizard’. For example, establish the last date of permitted data collection (e.g. a withdrawal of consent), only to find data coming in dated after this event, and you have clearly identified major study-conduct issues. All established by pressing a button! What pharmaceutical company would not want such a remarkable, reproducible, long-term capability: to ensure corrective action is taken on historical issues, and to ensure no repeat of them, for patient well-being and in preparation for inspection-readiness?
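The withdrawal-of-consent check just described is a one-liner once the data is unified. The sketch below uses invented patient IDs and dates; the shape of the check (compare each record’s date against the patient’s last permitted collection date) is the point.

```python
# Hypothetical sketch of the quick-fire 'wizard' check: flag any record
# dated after a patient's last permitted data-collection date
# (e.g. withdrawal of consent). IDs and dates are made up.
from datetime import date

withdrawal = {"1001": date(2021, 3, 1)}   # last permitted date per patient
records = [
    ("1001", date(2021, 2, 20)),  # fine: collected before withdrawal
    ("1001", date(2021, 3, 15)),  # problem: dated after withdrawal of consent
    ("1002", date(2021, 3, 15)),  # fine: no withdrawal recorded
]

def flag_late_records(records, withdrawal):
    """Return records collected after the patient's last permitted date."""
    return [(pid, d) for pid, d in records
            if pid in withdrawal and d > withdrawal[pid]]

print(flag_late_records(records, withdrawal))
# -> [('1001', datetime.date(2021, 3, 15))]
```

Each flagged row is a potential study-conduct issue; running the check is the ‘button press’ described above.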
From simple study updates, you can go very, very far beyond metrics.
How can we help you?
For more information on all tools mentioned in this blog, visit: www.apples-and-oranges.co.uk or our YouTube channel.
If your organisation needs assistance with metrics, please contact A&O. We can provide your department with additional tool features including:
Ability for any user to self-run (feed in EDC or lab files)
Automatic review
Ability to store and retain user comments = no need to re-review ALL data again
Provide 'live' data overview
Portfolio-wide PK KPI dashboard