I recently attended a lecture in which a commercial organisation proffered its nous in designing and developing lead-compound kinase libraries. It was interesting stuff, but the underlying technology had the feel of a commercially polished version of the automated HTS library hype of the late ’90s/early 2000s (which, as far as I am aware, never materially contributed to anything even approaching a blockbuster drug either).
The lecturer was adamant that his organisation could produce the same compound at scale with the same impurity profile, be it 2 mg or 20 g of substrate. Put another way, the compound supplied contains the same potential API and the same crap, irrespective of how much is manufactured. As bad as that may initially sound, it is a good thing. The purity was measured by UV.
Of some interest to me was his claim of at least 96% compound purity by UV. This is an interesting claim, mainly because without having characterised the compound, and without having established, for example, the molar extinction coefficient at the detection wavelength for every compound in question so as to work out an adjusted area purity, it is simply not possible to claim 96% purity. His throwaway comment was also a potential man-trap, perhaps deliberately introduced into the lecture to prompt someone in-the-know to ask a question, only to be fed a pre-prepared reply. I decided to take the plunge and pose my question anyway. His response was surprisingly frank and unexpected; I paraphrase: “It is a meaningless value in terms of purity, but the claim does have considerable value for marketing and QA purposes”. He meant to say percentage area but insisted on saying percentage purity, and he knew it. The response quickly defused the situation, shutting down a conversation headed somewhere he just didn’t want to go.
To elaborate the point further, the UV chromatogram below is clear: ca. 37/63% is 37/63% by area, but the peak at 25.6 min could represent 96% purity, or just as easily 1% purity – who knows? Claiming purity based solely on peak area, without other information and an understanding of the science, is simply unsound. The QA guys bought it though – they understood the metric had flaws, but the value still had a currency of sorts because they could ‘measure it’ for compound acceptance criteria.
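To see why area percentage and purity can diverge so wildly, here is a minimal sketch in Python. All numbers are hypothetical: under Beer–Lambert, a peak’s UV area is proportional to concentration times the compound’s molar extinction coefficient (ε) at the detection wavelength, so a weakly absorbing API next to a strongly absorbing impurity looks far worse by area than it is by moles.

```python
# Hypothetical chromatogram: two peaks, 63%/37% by UV area.
areas = {"API": 63.0, "impurity": 37.0}

# Hypothetical molar extinction coefficients at the detection
# wavelength (L·mol⁻¹·cm⁻¹) – the impurity absorbs far more strongly.
epsilons = {"API": 2_000.0, "impurity": 25_000.0}

# Beer–Lambert at fixed path length: area ∝ ε × concentration,
# so relative molar amount ∝ area / ε.
moles = {k: areas[k] / epsilons[k] for k in areas}
total = sum(moles.values())
mole_pct = {k: 100.0 * v / total for k, v in moles.items()}

# With these made-up numbers, the 63%-by-area peak works out
# to roughly 95.5 mol% – close to the lecturer's "96% purity".
print({k: round(v, 1) for k, v in mole_pct.items()})
```

The same 63% area peak could equally come out at 1 mol% with the extinction coefficients reversed – which is exactly why area% alone, without ε for each component, says nothing about purity.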
For a living I now primarily write software; my days of bench chemistry are well and truly behind me, but I do keep my ‘oar in’ with synthetic organic chemistry. Without much thinking, helped by the common use of terms such as ‘acceptance criteria’, I immediately related this meaningless “96% purity” value to another meaningless value with a fancy name: “Code Coverage” (or “Test Coverage”). All modern-language software developers will be familiar with the term.
For some reason my apparent code quality (and that of many developers) is still measured by Project Managers and QA officials using code test coverage, as though it were a meaningful metric for code quality (I used a Continuous Integration build server recently that even reported code test coverage as part of the build result). It is really quite appalling, and I still do not understand why Code Coverage features so highly in QA acceptance criteria and project deliverables. I refuse to argue the same point over and over, so I summarise it here: I can have 100% Code Coverage, with tests that all pass, that says nothing about code quality, nor about whether the code even works or is robust. In traffic-light terms, there’s a sea of green on an arbitrary dashboard (no red or amber, which keeps Project Managers happy). This is a false measurement of… everything. Why this fad of yesteryear persists I do not understand: targeted code-coverage tests == great; 100% code coverage for its own sake == stupid; code-coverage threshold criteria in QA acceptance and project plans == stupid.
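The 100%-coverage point is easy to demonstrate concretely. Here is a contrived Python example (function and test names are my own invention): the test executes every line of a buggy function, so a coverage tool reports 100%, yet the assertion is vacuous and the bug sails through.

```python
def apply_discount(price, percent):
    # Bug: subtracts the raw number instead of a percentage of the price.
    return price - percent

def test_apply_discount():
    # This call executes every line of apply_discount,
    # so line coverage reports 100%...
    result = apply_discount(100.0, 10)
    # ...but the assertion is vacuous: it passes no matter
    # what the function returns. (100.0 - 10 even happens to
    # equal the correct answer for this one input.)
    assert result is not None

test_apply_discount()  # passes; the dashboard goes green
```

Run under any coverage tool, this reports full coverage and a passing suite, while `apply_discount(50.0, 10)` returns 40.0 instead of the 45.0 a 10% discount should give – green dashboard, broken code.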
Yet code coverage, like the arbitrary, meaningless purity-by-peak-area figure, can be counted, so it is counted – and the acceptance criterion is meaningless on both counts.
— Published by Mike, 14:53:43 11 September 2016