
Round Table

Science and Integrity

How do I know a scientific paper isn’t junk?

 

Note: 4th in a series on Integrity in Science and Medicine. Future presentations are planned to include an annotated bibliography on clinical trials examining fungal colonization of nasal passages and sinuses; phagocytic cells such as eosinophils; eosinophilic major basic protein; and TGF beta-1.

The science strongly suggests that there is no basis for treating someone for months with antifungals against fungi presumed to be doing harm simply because they were found in the nose. In a nutshell: they aren't. Read why in the next essay.

 

* * *

 

The subject of this report is the recent proliferation of online science journals that aggressively market publication to targeted authors for a fee.  These new journals publish papers quickly, though usually without much scrutiny, and provide the public with free online access to the new information.

The following are issues for which the scientific community needs to review and set standards:

  1. If there are problems in the methods of a study, how does that affect the reader’s ideas of what is worthwhile science versus junk science?
  2. How about lab testing?
  3. How can the interested reader know which claims are junk and which aren’t?
  4. Who are the reviewers, and how competent are they?

 

Don't forget that this is the era of fast publication, an era in which a $5,000 fee for a quick citation used to promote a product is just an advertising cost of doing business. The public might unknowingly be duped by this lack of scientific integrity.  Some routinely observed problems with online journals include sloppy peer review; absent or aberrant methods; rampant speculation; no restriction on generalization; uncontrolled and unfounded assumptions; and invariably (among others), an inadequate or biased reference base.  Finally, the desultory treatment of control populations, often the most difficult aspect of clinical trials, is almost universal in journals that sell authorship of bundled papers (beware the "Ides of Special Editions"!) to special interest groups.

 

How can you recognize junk science easily? 

1. Look for three things: Methods, methods, methods. If what was done doesn’t make sense, don’t even read the conclusions. 

2. Look for decent control groups. If the patient is his own control for an intervention study, that approach works. But if you see someone looking at cases to make a suggestion for testing (money, money, money), and no control group is presented, throw the study away and don't buy the product.

3. If the study is full of unsupported speculation or uses dubious authorities (just imagine reading, "According to the Mitochondrial Society of Pocomoke"; that unreferenced citation won't cut it), toss it.

4. If the paper doesn’t use a prospective study design, it cannot possibly determine causation.  And if the prospective design ignores reasonable variables, junk it is. 

5. Conflicts of interest.  Oh my, this one is so obvious; how can concealment ever be permitted? And then we see a co-author who was also an editor of the same paper!

6. Biased populations. Don’t tell me about a universal cure when the only participants are men or women of one age group.

7. Absence of delineation of the methods used to validate antibody testing in ELISA (see attached opinion on unreliable ELISA testing).

8. Claims for treatment benefit without documentation of before-and-after data.  This one seems so blatant; it usually is some sort of opinion piece in which the author is building a case for their approach, but we never see what happens with any intervention.

9.  Ethics review.  If one is going to write about human research, someone had better have oversight from an IRB. Sure, IRBs are expensive, and who wants to fill out all the forms?  But IRBs, like the IRS, can grind exceedingly fine. Don't cheat the public by not having a licensed group review methods and ethics, especially if one is selling something, either directly or indirectly.

10.  Thorough references.  When an author references publications to make a point in their own favor, omitting opposing references is just plain academic fraud.

 

Time-tested research standards were posted in December 2014 and are referenced here.

 

The problems of bias and compromised integrity aren't confined to online journals, many of which are rigorous and reputable.  We see lousy methods and lousy control groups in the best of print journals too.  Do I need to mention the well-known editorial bias of some of the most prestigious print journals? How is a patient, perhaps one with a chronic fatiguing illness and deficits in executive cognitive function, who is seeking hope and help, ever to know what is junk and what isn't?

 

We posed a number of questions prompted by these concerns (Junk? Not junk?) to a group of experts for comment and consideration. Their comments are included as a Round Table, a chance for careful opinions to be presented collegially, without bias or motivation to skew their opinions for possibly unstated gain.  None of our experts have any conflicts of interest to report.  The individual opinions expressed are those of the individual experts, not necessarily those of www.survivingmold.com or the other members of the panel.

 

Our participants include Judy Mikovits, PhD and Frank Ruscetti, PhD (JAM/FWR); Russell Jaffe, MD, PhD (RJ); Lisa Petrison, PhD (LP); MariBeth Raines, PhD (MBR); and, representing www.survivingmold.com, Ritchie C. Shoemaker, MD (RCS).

 

Russell Jaffe trained in Internal Medicine and Biochemistry at Boston University Medical Center’s University Hospital. Russ was awarded a Public Health Officer Commission and Residency in Clinical Pathology at the Clinical Center of the National Institutes of Health. He is board certified in Clinical Pathology and subspecialty certified in Chemical Pathology. As a methodologist, he has contributed over a dozen fundamental or gold standard reference methods. As a metrologist, he has pushed the limits of technology, improving precision and reducing variance. Dr. Jaffe has been awarded numerous fundamental patents. Dr. Jaffe maintains a Fellow status at the Health Studies Collegium research foundation, in the American Society for Clinical Pathology (ASCP), the American Association for Clinical Chemistry (AACC), The American College of Nutrition (ACN), the American Academy of Allergy Asthma and Immunology (ACAAI) and the Royal Society of Medicine (RSM). His current activities include being Chief Medical Officer and CEO of ELISA/ACT™ Biotechnologies, PERQUE™ and MAGique BioTherapeutics™.

Lisa Petrison (LP) received her PhD in marketing and social psychology from the Kellogg School of Management. She served as a tenure-track professor at Loyola University, Chicago, Illinois. She is currently executive director of Paradigm Change.

 

Francis W. Ruscetti was born in Boston, Massachusetts in 1943.  After graduating from Boston University and serving in the US Air Force, he received his Ph.D. in microbiology from the University of Pittsburgh in 1972. For the next 42 years (39 at the National Cancer Institute) he studied immunobiology, retrovirology and stem cell biology and developed considerable expertise in these areas.  He was co-discoverer of IL-2, IL-5, and IL-15, although those designations were assigned after their discovery.  He was co-discoverer of human T cell leukemia virus, the first disease-causing human retrovirus.  He was also the co-discoverer of the regulatory effects of transforming growth factor-beta on hematopoietic stem cells.

 

Judy Mikovits, PhD earned her BA from University of Virginia and her PhD in biochemistry and molecular biology from George Washington University. She has undertaken a quest over the past 35 years to understand and treat chronic diseases. She has studied immunology, natural products chemistry, epigenetics, virology and drug development. In 2006, she became attracted to the plight of families affected by ME/CFS and those dealing with autism. She has played a major role in demonstrating the relationship between immune dysfunction and these diseases.

 

Maribeth Raines, PhD, has over 20 years of experience at Nichols Institute and Quest, and is currently VP of Pacific Biomarkers. She is a foremost expert in companion biomarkers and test validation.

 

Ritchie Shoemaker MD just wanted to be a rural family doc when he finished Duke Med School in 1977.  That was fine until Pfiesteria came along in 1996 and changed the focus of his career from staving off death and disability in primary care to unveiling mechanisms of inflammatory responses caused by exposure to environments that hosted biotoxins. That quest has taken him into the detail of countless basic sciences and now, genomics.

 

Starting us off is a general comment:

 

JAM/FWR: All journals have the possibility of bias and conflicts of interest. The problem is pervasive and a greater problem in recent years with so many online journals that essentially anything can get published without appropriate review. The XMRV controversy is an example of total corruption of the system, led by the journals Science, PNAS, Retrovirology and PLOS ONE.  The details are all documented in Plague (NB: available on Amazon) (www.plaguethebook.com).

 

Unfortunately, all of the above are possible pitfalls not only for the non-scientific reader of these papers but also for the scientific reader. It is our opinion that the system is totally broken and that peer reviewers should be named in order to reveal potential conflicts of interest as well as lack of expertise in a field.

 

Beyond inadequate references, there is often biased use of references to exclude data that disagree with the authors. We can easily walk through a few examples of this but feel this topic is an entire roundtable in itself. Frank and I used to teach graduate classes in which students were given two publications with the same data but opposite conclusions, and the students were asked to detail why.

 

1.      What is peer review?

 

RJ: Classical and meaningful peer review occurs when knowledgeable colleagues have the time and support to thoughtfully review and critique a manuscript. Three such reviewers are sufficient. It is OK for authors to recommend reviewers and also to ask that particular reviewers be excluded.

 

LP: At least in my field (marketing/psychology), editors accept submissions and then send them out to members of the review board or to ad hoc reviewers. The reviewers send back comments along with a recommendation about whether to accept, ask for revision or reject. Editors vary in terms of the extent to which they always follow these reviewer recommendations. The process is supposed to be blinded so that reviewers aren’t influenced by who the authors are (though the identities often can be guessed) and so that authors don’t harbor a grudge against negative reviewers (though the identities sometimes can be guessed).

 

RCS: Peer review remains the bastion of modern academic science.  We submit manuscripts to a journal, hoping the Editor will refer the paper for review. The paper is anonymous to the reviewers and the reviewers are unknown to the authors.  Comments of reviewers are supposed to be critical, but fair. Honesty is assumed. In my experience, with nearly 20 papers published, peer review is the weakest link in the publication process, as one person can hold up publication for unknown reasons for an indeterminate time.

 

FWR/JAM: Anonymous peer review is two or three people in the field of your paper who are considered experts, who are not working with you, and who generally have not published with you on this topic. Editors can override peer reviewers, although this is almost never done (it was done by the editor of Retrovirology to allow five very poorly done negative papers to be published simultaneously on December 22, 2010).

 

2.      Have you been a peer reviewer?

 

RJ: Yes

LP: Yes

RCS: Yes

FWR/JAM:  Yes. On hundreds of papers and grants for more than 30 years. JAM's graduate school had a mandatory class on peer review. Comments to authors should be constructive and without personal bias; a peer review should strengthen a paper, not tear it down.  Peer reviewers make separate comments to editors, which helps the editor decide.

 

3.      Do you have an opinion about on-line journals having guest editors sponsor and also pay for several additional papers, often in which they are co-authors, all the while providing review as well?

 

RJ:  Pay for papers or for reviews is at best a slippery slope. Pay to help improve the clarity of the article and/or to verify that the statistics are accurate is a different matter.

RCS: I have been offered by multiple journals a deal where my paper will be free if I bring in four or five others who pay. I would be named as a co-author for each of the other papers and would be responsible for peer review. I see this same fraud offered repeatedly.  When I see a “special edition,” I want to know who is being paid and who is profiting from publication.

LP: I have not heard of that happening.  It does not sound very ethical. How journals operate should be transparent to readers.

FWR/JAM: Generally, guest editors do not co-author papers in the special editions; they simply summarize all of the work in the field.  This can be an excellent way to catch up on what is new in a given field. If there is controversy, both sides should get equal time, with the guest editor serving as moderator to keep it professional. The situation described above is a conflict of interest and an advertisement, and should not be done.  If I were to read such a paper, it would be with suspicion.  I probably wouldn't read it at all.

 

4.      Is there ever a time when conflicts of interest may be concealed by an author? If concealed, could conflict of interest create a lack of objectivity or bias?

 

RJ: No.  Yes.

RCS:  No and yes, absolutely concealed conflicts destroy any thought of integrity.

FWR/JAM: NO, NEVER. Concealed conflicts are not acceptable. Of course.

LP: No. Yes.

 

5.      What other sources of bias do readers need to be aware of?

Manipulation of control groups: JAM/FWR: Clear omission makes data/conclusions suspect.

References: JAM/FWR: Incompetence or fraud.

 

RJ: Meta-analyses with methods designed for self-fulfilling outcomes, as has happened too often through the Cochrane Collaboration.

RCS: One Environmental Medicine (AAEM) paper on treatment of mold problems had 220 or so references, and none of them cited our group's papers. None. A recent paper on possible mold problems in sinuses that suggested treatment ignored the overwhelming ENT and allergy literature showing that idea was pure baloney. Reviewers were clearly negligent in both cases, as were the authors. The worst intentional offenders are ACOEM in 2002 and 2011 (the verbiage in these "opinions" is the same), exceeded only by the AAAAI opinion on moldy buildings of 2006. At least the AAAAI opinion was taken down.  It did receive special attention from the 2008 US GAO report as particularly worthless in that it eliminated from discussion the very subject that is the source of the problem: inflammatory immune responses.

 

6.      What are control groups?            

 

RJ: Comparables subjected to the same conditions as the experimental subjects.

RCS: Regarding mold, "normal" people living randomly in places without evidence of exposures and without evidence of untreated illness.

LP: Controls should be randomly selected individuals who do not meet the case definition for illness.  For cases, they should be individuals who meet a case definition. In CFS/ME, where there are several definitions, the most rigorous definitions should be used.


7. What are cases?

           

RJ: Units from which studies are derived

RCS: People who meet a carefully stated case definition. If a person can't show all elements of a clearly identified and accepted case definition, then the person isn't a case. We saw this rule violated in a mold/CFS paper.  Trying to "make Cinderella's shoe fit," even though it has no chance, isn't science. The reason we have a case definition is so that people in Bozeman, Montana can talk with people in Biloxi, Mississippi and all will be talking about the same thing.

LP: see above

FWR/JAM: Cherry-picking patient and control groups is a clear violation.

 

8.      If authors of a paper are trying to define cases by comparing them to controls, do the authors have the duty to present features of control groups to validate them as controls?

RJ: Yes

RCS: Yes and more rigorously than a casual reader would ever expect.  Lousy control groups are the death knell for countless case studies. We see this problem all the time in the few non-CRBAI papers published in CIRS-WDB.

LP: Of course control groups need to be validated. A control group that is recruited from friends and families of cases is immediately suspect. Selection must be random.

JAM/FWR: Yes. Unbiased: age- and sex-matched, zip codes, historical controls.
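As an aside on what "age- and sex-matched" can look like in practice, here is a minimal sketch with entirely hypothetical participants (not any panelist's data or method): each case is paired with the closest-aged unused control of the same sex.

```python
# Hypothetical participants: (id, sex, age). All values invented for illustration.
cases    = [("c1", "F", 34), ("c2", "M", 52), ("c3", "F", 61)]
controls = [("k1", "F", 30), ("k2", "F", 60), ("k3", "M", 55), ("k4", "M", 49), ("k5", "F", 37)]

available = list(controls)
matches = {}
for case_id, sex, age in cases:
    # Restrict to same-sex controls not already used, then take the nearest age.
    candidates = [c for c in available if c[1] == sex]
    if not candidates:
        matches[case_id] = None  # no suitable control; the case would have to be reported unmatched
        continue
    best = min(candidates, key=lambda c: abs(c[2] - age))
    matches[case_id] = best[0]
    available.remove(best)

print(matches)  # {'c1': 'k5', 'c2': 'k3', 'c3': 'k2'}
```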

 

Or can we simply accept the authors’ suggestion that control groups are validated?

RJ: No

RCS: No. Trust no one when it comes to controls. If someone tries to tell us their work is controlled without showing us, it is unlikely they have controls.

FWR/JAM: No

 

9.      If such data is not presented, would a peer reviewer ever accept the idea that control groups are validated?

RJ: No

RCS: No paper with sloppy controls should ever pass peer review, no matter how much they paid for publication.  But I see it all the time.

LP: Reviewers vary widely in quality; some might accept the absence of decent controls.

FWR/JAM: A peer reviewer should ask for clarification.

 

10.  If an author is trying to show that an exposure, say to a water-damaged building (WDB), results in illness parameters, what does the author have to show about types of exposure to WDB compared to controls?

Ex: ERMI, HERTSMI-2 air samples

Visible mold, musty smells

 

RJ: Worthy of a symposium combining NIST, ASHRAE and IEEE

RCS: We are back to the case definition.  One must satisfy the requirement that there is exposure and show for each person how that requirement is met.

LP: I’ve not found that smelling mold or even seeing mold (except on ceiling tiles) has correlated very well with how I have felt in particular buildings. So just for that reason alone, I am a little skeptical about papers that use that as a variable.

 

As tests go, the ERMI seems pretty strong. It was developed by the EPA, is done by several labs, and seems to be pretty well-validated. So that is a measurement that I would think that many people would be inclined to use.

 

But I don’t think that people have to use the ERMI if they want to study the effects of indoor mold. They can use other measures if they prefer, or if they can’t afford the ERMI, and then see what kind of results they get.

 

11.  Can simple use of symptoms alone ever be used to justify labeling an exposure as causative when previously published objective standards, gathered prospectively, are omitted?

 

RJ: No

RCS: I get to see this approach published too.  It is distressing that such limited confirmation is recognized as having credence.

FWR/JAM: Absolutely not! Symptoms do not show causation.

 

12.  Does this omission impact the credibility of the study in question?

 

RJ: By definition / at first principles

RCS: Yes.  With no objective standards, no study.

LP: It depends on the study and its goals.

JAM/FWR: Yes

 

13.  If such data is not presented, would a peer reviewer ever accept the idea that control groups are validated?

RJ:  No

RCS: No

 

14.  Let’s talk about lab tests. Does an author have to validate tests that are FDA approved?

 

RJ: Author has obligation to include whatever quality control was done on the run / lots / specimens used in the study. FDA Bureau of Devices approval is mostly about the logistics of tests and much less about the meaning of tests or results. It is the obligation of the Lab Director to make available such data and any statistics they may keep on that information if so requested by investigators.

RCS: The methods need to make clear what tests were done and how the tests were validated.

LP: If tests have been approved by the FDA and are performed by multiple laboratories, I feel somewhat confident that the test is actually measuring something real and that careful validation by other researchers using the test is less necessary.

MBR: This depends on the lab and the use of the test.  Any CLIA lab has to verify the analytical performance of an FDA cleared test.  If the test is modified or used for something different than the intended use, more extensive validation may be required.

JAM/FWR: Not if they are being used in exactly the patient populations and sample types (matrices) validated by the FDA. But any validated kit must still be validated in individual labs. Package inserts should never be used as validation or as SOPs.

 

15.  How about tests that are not FDA approved?

 

RJ: Same answer as answer 18

LP: There is a lot of money to be made from developing tests and then selling them to the public. At least twice in the history of CFS, many patients have paid large sums for high-profile proprietary tests that had been subjected to peer review but eventually turned out to have problems. The first was the test for the "ciguatoxin epitope," and the second was the set of tests for the retrovirus XMRV.  I have written about this test in detail at www.paradigmchange.me/xmrv (Editor's note: comments on Dr. Mikovits' book, Plague, are found at this link).

 

In light of these incidents, I am hesitant to believe that any proprietary test is actually doing what it is supposed to be doing unless it has been validated in a blinded case/control method by an independent researcher.

 

The test that I am thinking about is a laboratory-developed test for mycotoxins in urine. There is one peer-reviewed paper on the development of this test. But this paper, like the XMRV paper, is hard for me to evaluate, as it uses methods I am not familiar with.

 

The XMRV debacle makes me think that even highly specialized experts cannot spot major method problems in this kind of paper from afar, and therefore that it should be considered as only a first step in the development of a test - rather than as conclusive proof that the test is measuring anything real.

 

If researchers want to show that a test is able to distinguish patients with a given disease from non-patients, then they must be using their own control groups rather than making assumptions about previously published work.

 

If researchers are investigating mycotoxins in the urine of CFS patients, then we expect to see blinded control samples and blinded case samples. As it is, I don't know what to make of that study.

 

MBR: Yes, but the extent of validation depends on the lab accreditation and intended use.  The lab must state that the test has not been evaluated by the FDA and may state that it is for research purposes only.

JAM/FWR: YES OF COURSE

 

16.  If a lab has a CLIA certificate, does the test sold by that lab have to be validated by published academic papers?

 

RJ: CLIA is about operation of labs and nothing about commercialization.

MBR: Published academic papers have nothing to do with validation of an assay.  As stated, validation has to do with the analytical performance and intended use.

JAM/FWR: NO, research papers do NOT validate clinical studies. And any testing based solely on research studies must say the test is for research use ONLY and should not be used to make treatment decisions.

 

17.  How is a new test validated? Does the validation of the new test have to compare test results to known published controls and additional methods of measurement?

 

RJ: Usually there is some comparative standard. Sometimes labs publish reproducibility and let the marketplace decide operational validity.

MBR: Validation parameters are set by the laboratory’s Quality Management System and can vary between labs but should be aligned with the CLSI guidelines.

FWR/JAM: Sensitivity (false negatives), specificity (false positives), validation of all sample matrices (serum, plasma, tissue culture media, urine, CSF, etc.), precision, crosstalk, interference.  Clinical validation must be done in thousands of samples, both positive and negative.
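To illustrate the first two parameters in that list, here is a minimal sketch, with an entirely hypothetical validation panel and made-up assay calls, of how sensitivity and specificity are tallied from blinded samples of known status.

```python
# Hypothetical blinded validation panel: (true_status, assay_call).
# True means the sample is known to contain the analyte; the calls are invented for illustration.
panel = [
    (True, True), (True, True), (True, False),     # one false negative among the known positives
    (False, False), (False, False), (False, True), # one false positive among the known negatives
]

tp = sum(1 for truth, call in panel if truth and call)
fn = sum(1 for truth, call in panel if truth and not call)
tn = sum(1 for truth, call in panel if not truth and not call)
fp = sum(1 for truth, call in panel if not truth and call)

sensitivity = tp / (tp + fn)  # false negatives drag this down
specificity = tn / (tn + fp)  # false positives drag this down

print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```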

 

18.  Are ELISA tests antibody based?

 

RJ: By definition: Enzyme-Linked Immunosorbent Assays. The underlying immunoassay principle was introduced by Berson and Yalow in the 1950s, work for which Yalow was awarded the Nobel Prize in 1977.

MBR:  Most are, but there are some that may use affimers, which are not antibodies.

 

19.  Can an antibody recognize more than one antigen?

 

RJ: Almost always.

MBR: Antibodies recognize epitopes, and a common epitope can be present in different antigens.  For example, anti-phospho-tyrosine measures phospho-tyrosine in numerous different proteins.

FWR/JAM: Polyclonal antibodies by definition recognize more than one antigen. Monoclonal antibodies can also recognize more than one antigen, particularly if epitopes are conformational. A classic example is the monoclonal antibody 7C10 used in our XMRV studies. It was fully characterized as recognizing all known polytropic and xenotropic MLVs (gammaretroviruses). It does not recognize betaretroviruses of any kind, including endogenous betaretroviruses (of all tested to date). So for a publication to use that antibody and conclude cross-reactivity without showing data from a single experiment, without thorough dose-response titrations of all antibodies, or without referencing any of the previously published studies detailing the characterization of that antibody is worse than incompetence: it's fraud.

 

20.  What is an epitope? Could an epitope cross-react with an antibody and give the impression of presence of a different antigen?

 

RJ: An epitope is anything recognized as immunologically distinct. This includes classic antigens (glycoproteins; lipoglycoproteins; peptides over 1,000 Daltons), haptens (small molecules that alter the shape of whatever they bind to and render it operationally foreign to that organism), and lectins.

MBR: The epitope is what the antibody binds to; it can be a conformational feature of an antigen or based on amino acid sequence.  The specificity and cross-reactivity of an antibody may be required as part of assay development and validation.  Polyclonal antibodies are by nature a mixture of antibodies recognizing multiple epitopes, while a monoclonal antibody should recognize just one epitope.

FWR/JAM: From Wikipedia: An epitope, also known as antigenic determinant, is the part of an antigen that is recognized by the immune system, specifically by antibodies, B cells, or T cells. For example, the epitope is the specific piece of the antigen that an antibody binds to. The part of an antibody that binds to the epitope is called a paratope. Although epitopes are usually non-self proteins, sequences derived from the host that can be recognized (as in the case of autoimmune diseases) are also epitopes.

 

Antigens and epitopes are usually non-self; when an antibody recognizes a cellular (self) protein, that protein is called an autoantigen. Cross-reaction between a microbial antigen and a self antigen is called molecular mimicry.  This happens in several diseases, including HTLV- and HIV-associated neuroinflammatory disease.

 

Chronic recognition of an autoantigen results in autoimmune disease.

 

21.  If an antibody test recognizes more than one epitope, does an author have a duty to demonstrate that the antibody isn’t simply reflecting presence of an unknown epitope?

 

RJ: Yes. Cumbersome but doable studies using blocking techniques of one sort or another can usually tell how much cross-reactivity is mixed in with the 'true' reactivity.

MBR: It is the reviewers who should question the specificity of an antibody used by an author; there are cost implications for authors that may prohibit verification.  There may also be misrepresentations by a vendor that are not verified.  Although this should be addressed by CLIA labs, typically the manufacturer's claims are accepted instead of being evaluated, again to cut development costs.  This must be demonstrated for FDA-cleared kits and Analyte Specific Reagents.

RCS: If the antibody is polyclonal, say so up front and don’t let anyone announce to the unsuspecting that the antibody is specific when it isn’t. This series is about integrity.

FWR/JAM: An author has a responsibility to show all data and do every possible control in the characterization of every reagent in a clinical test. What is described here is crosstalk, and that is a necessary validation of ELISA diagnostic tests.
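To make RJ's earlier point about blocking studies concrete, here is a minimal sketch under a deliberately simplified model (hypothetical optical-density readings, not an actual protocol): whatever signal survives pre-incubation with an excess of the intended antigen is attributed to cross-reactivity.

```python
# Hypothetical ELISA optical-density readings for one antibody; all numbers invented.
od_unblocked = 1.20  # sample alone
od_blocked   = 0.30  # same sample pre-incubated with excess of the intended antigen
od_blank     = 0.05  # buffer-only background

# Simplified assumption: signal removed by the block is 'true' reactivity,
# and signal surviving the block (above background) is cross-reactive.
cross_signal = od_blocked - od_blank
total_signal = od_unblocked - od_blank

cross_reactive_fraction = cross_signal / total_signal
print(f"Estimated cross-reactive fraction: {cross_reactive_fraction:.0%}")  # about 22% in this made-up case
```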

 

22.  So, if epitopes are not defined, can an author ever conclude that the given antibody is specific for one antigen or another?

 

RJ:  No. There is a bigger problem in that serology or antibody assays are physical chemistry, measured either qualitatively as dilutions or quantitatively. Function is not possible to measure. Antibodies can be helpful and neutralizing, or harmful and complement-fixing. Knowing there is an antibody does not tell whether it is helpful or harmful; friend or foe.

MBR: Antigen specificity can be verified without mapping the epitope. 

 

23.  What factors are important in assessing presence of very low concentrations of antigen in urine samples, understanding that urine is an exceptionally complex matrix?

Ex: protein binding; pH; osmolality; crystalluria; pyuria; hemoglobinuria; bacterial contamination; temperature

 

RJ: Dr Donald Young developed sulfamic acid as a tool to allow urine collections to be more useful in physiologic or biochemical studies.

Choosing the correct specimen, controlling properly for pre-analytic as well as analytic and post-analytic variables as well as performing internal blind split samples as part of routine quality control is highly desirable.

MBR: Absence of non-specific protein binding, proteases, pH, ionic strength

 

24.  If protein binding could alter antigen detection, wouldn't all urine parameters have to be defined to show absence of confounding?

 

RJ: Matrix effects have long been accepted as variably variable in lab tests.

MBR: These aspects need to be considered as part of sample collection and stabilization.

 

25. What does the term false positive mean, especially re epitopes in urine? In tissue?

 

RJ: A false positive, as defined in Beyond Normality by Galen and Gambino, is the test registering a result when the source specimen is not affected by the analyte being measured.

RCS: Another part of false positives is exposures from sources other than buildings, especially foods.  Differential diagnosis is needed to show whether an apparently false positive is actually true.

MBR: False positive refers to specificity and needs to be established as part of the clinical utility of a test.

 

26. What does reproducibility mean?

 

RJ: Reproducibility in regard to lab results usually means how split samples, either blind or concurrent, compare.

MBR: Typically, reproducibility relates to accuracy and precision but may also be linked to robustness and ruggedness, the latter two being part of a bioanalytical validation.

FWR/JAM: In clinical testing, reproducibility is the precision of the test. Intra-laboratory and inter-laboratory precision must be demonstrated for all clinical tests and should be done every time a reagent lot is changed, particularly for antibodies (lots and manufacturers of antibodies MATTER, and validation of an assay must be done every time the manufacturer's lot of antibodies or any biological reagent changes). NIST defines inter-laboratory reproducibility as: the variability between single test results obtained in different laboratories, each of which has applied the test method to test specimens taken at random from a single quantity of homogeneous material.

 

There are NIST and FDA standards for every type of validation describing how the tests are to be done.
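As a minimal sketch of the intra- and inter-laboratory precision FWR/JAM describe, the split-sample results below are invented, and the 15% acceptance limit is only a placeholder, not an FDA or NIST figure.

```python
import statistics

# Invented replicate results on aliquots of one homogeneous sample.
lab_a_runs = [12.1, 11.8, 12.4, 12.0]  # same lab, same reagent lot (intra-laboratory)
lab_b_runs = [13.0, 12.7, 13.4, 12.9]  # a second lab testing the same material

def percent_cv(values):
    """Coefficient of variation: sample standard deviation as a percent of the mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

intra_cv = percent_cv(lab_a_runs)
inter_cv = percent_cv(lab_a_runs + lab_b_runs)  # pooled across both laboratories

ACCEPTANCE_LIMIT = 15.0  # placeholder; a lab's Quality Management System sets the real limit
print(f"Intra-laboratory %CV: {intra_cv:.1f}")
print(f"Inter-laboratory %CV: {inter_cv:.1f}")
print("Within placeholder limit:", inter_cv <= ACCEPTANCE_LIMIT)
```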

 

27.  If a test is not shown to give reproducible results in a lab, is there any reason that test could be sold to the public as reproducibly reliable?

 

RJ:  No. May be used for investigational use (a category FDA Bureau of Devices sometimes uses; usually for that reason).

MBR: For CLIA laboratories, the accuracy and precision should be defined and documented in their Quality Management System and could be subject to review by regulatory bodies (FDA, CMS, CAP, and NYSDOH).   FWIW, NYSDOH requires the performing laboratory to submit validation reports for all tests used for patient management for review and approval before those tests are used on patients in New York State.

FWR/JAM: Of course NOT

 

28.  Are there risks in using tests that aren't reproducibly reliable or specific in diagnosis and treatment?

 

RJ: There are always risks in any life situation. My understanding of the College of American Pathologists and the American Society of Clinical Pathologists is that it is the obligation of the lab to quality control the results and the obligation of the clinician and/or scientist to know the context and meaning of the results.

MBR: Yes if they are used for patient management as indicated above.

JAM/FWR: Of course. All treatments have risks, so to treat for a microbial agent based on an unreliable test is dangerous and is malpractice for the physician.

 

29.  We've seen ELISA tests for Lyme disease be recognized as having false negatives and false positives. Yet the FDA now suggests using ELISA testing to diagnose Lyme. Is there a problem with this logic?

 

RJ: To my knowledge, FDA approval of a diagnostic kit has nothing to do with what the test means and only to do with how the test operates, including rudimentary quality control.

MBR: There are numerous tests with high false-positive rates that should reflex to a confirmatory test.  False negatives are a bigger problem, but one needs to weigh the false-negative rate against the risk of missing a diagnosis.
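To see the trade-off MBR describes in numbers, here is a minimal sketch of predictive values; the sensitivity, specificity, and prevalence are illustrative assumptions, not published performance figures for any Lyme ELISA.

```python
# Illustrative assumptions only; not real Lyme ELISA performance data.
sensitivity = 0.95  # fraction of true cases the screen calls positive
specificity = 0.90  # fraction of non-cases the screen calls negative
prevalence  = 0.02  # assume 2% of the tested population actually has the disease

population = 100_000
cases      = population * prevalence
non_cases  = population - cases

true_positives  = cases * sensitivity
false_negatives = cases - true_positives
false_positives = non_cases * (1 - specificity)
true_negatives  = non_cases - false_positives

ppv = true_positives / (true_positives + false_positives)  # chance a positive is real
npv = true_negatives / (true_negatives + false_negatives)  # chance a negative is real

print(f"Positive predictive value: {ppv:.0%}  (most positives are false, hence reflex to a confirmatory test)")
print(f"Negative predictive value: {npv:.2%} (false negatives are rarer, but each one is a missed diagnosis)")
```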

 

30.  If we were reporting ELISA results for very low levels of antigens in tissue or bodily fluids, what role is there for back-up methods, say HPLC, GC, or MS?

 

RJ: High resolution ICP/MS, GC/MS, and effects on cell cultures are all likely to give confirmatory signals about issues of concern as well as about interdependent, relevant information in regard to other variables in the scientific equation subsumed in the initial question that caused the serology/ELISA test(s) to be done.

MBR: Confirmation is important.

 

31.   Are there other examples of tests that used ELISA in the CFS world that turned out to be unreliable?

                  Ex: Ciguatoxin assay

 

RJ: Too numerous to count

JAM/FWR: WB and culture: VIPDx for XMRV.

 

32. Did the lab selling Ciguatoxin assays have to refund fees patients paid for unreliable tests?

 

RJ:  Good question... Justice Organization might know

LP:  I’ve not heard of any ME patients getting their money back for that ciguatoxin test.  When the XMRV test was found to be unreliable, patients who had paid for the test were not refunded the money they had paid for the test. The argument I heard for why this was okay was that this was an experimental test to begin with and patients should have realized that the results might not be accurate.

Caveat emptor.

FWR/JAM: I don't know if they refunded them, but that would be the ethical thing to do. When the results of the blinded Blood Working Group paper on XMRV came out showing there were no reproducible tests for XMRV, several authors, including the senior author, asked for funds to be returned to patients.

 

33. Does a lab selling unreliable tests have a duty to refund money patients paid when the test is known by the lab to be unreliable?

 

RJ: Makes sense to me. Medicare fraud unit agrees.

MBR:  It is typically the responsibility of the regulatory bodies, be it the state that licensed the lab, CMS, the FDA, or the DOJ, to fine the laboratories for inappropriate testing.  The money is not typically refunded to patients.

FWR/JAM: No one is responsible for exotic biology.  There is something about a test that renders it unreliable, but once a test is known to be unreliable, the ethical thing to do is refund patients' funds (see above).

 

Questions for our readers.  We are talking about integrity!

 

A. Can a PhD ever be a medical director of a lab?

B. Can we accept an opinion-based paper that somehow deletes any discussion from prior work that doesn’t support the author’s opinion?

C. Should physicians be allowed to reap financial gain from a lab for referral of cash-paying patients to that lab?

 

Please send your answers to the website: Info@survivingmold.com

