
Determination of Ageing by ECG inclusions/exclusions?


I am studying ageing and am considering the ECG signal because of its theoretically high sensitivity (escardio). Some relevant factors:

  • Sensitivity
  • Gender
  • Medical treatment

Benchmark: dental and wrist radiography (RTG), with an uncertainty of about ±2 years. Factors such as tooth brushing and the hardness of drinking water strongly affect dental health. Exposure to ionising radiation is also problematic, so a less radiative approach would be preferable.

ECG

Cardiac characteristics can be derived from the power spectrum of the ECG signal, but currently with greater uncertainty than dental RTG (2016), because the inclusion/exclusion criteria are insufficient and too many exclusions remove data segments related to the autonomic nervous system. Processes to be excluded from the ECG signal (a minimal filtering sketch follows the list below):

  • respiratory process
  • heart beat itself
  • some cellular-tissue processes
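
To make the idea of spectral exclusion concrete, here is a minimal, hypothetical sketch (not taken from any cited study): it resamples an RR-interval series onto a uniform grid, computes a Welch power spectrum, and masks out the high-frequency band commonly attributed to respiration. The band limits, resampling rate, and the synthetic `rr` data are illustrative assumptions.

```python
import numpy as np
from scipy import signal

def hrv_spectrum_excluding_respiration(rr_intervals_s, fs_resample=4.0,
                                       resp_band=(0.15, 0.40)):
    """Power spectrum of an RR-interval series with the respiratory
    (high-frequency) band masked out. All parameter values are illustrative."""
    rr = np.asarray(rr_intervals_s, dtype=float)
    t = np.cumsum(rr)                                  # time of each beat (s)
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_uniform = np.interp(t_uniform, t, rr)           # resample to a uniform grid

    # Welch power spectral density of the detrended series
    freqs, psd = signal.welch(rr_uniform - rr_uniform.mean(),
                              fs=fs_resample, nperseg=256)

    # Exclude the respiratory (high-frequency) band, keep the rest
    keep = (freqs < resp_band[0]) | (freqs > resp_band[1])
    return freqs[keep], psd[keep]

# Synthetic RR intervals: ~1 s mean with small variability
rng = np.random.default_rng(0)
rr = 1.0 + 0.05 * rng.standard_normal(600)
f, p = hrv_spectrum_excluding_respiration(rr)
```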

Some cellular-tissue processes

  • ROS determination…
  • genes related to lipids and their changes with ageing in metabolic syndrome
  • decreased cellular replication with ageing
  • DNA replication stress, genome instability and aging
  • MICCAI 2012 Workshop on Novel Imaging Biomarkers for Alzheimer's Disease and Related Disorders

Low-Cost Targets

  • internal physiology of cardiac events in association with…
  • cardiac and brain imaging markers related to substance use disorder, HIV, and aging (Stanford)

How can ageing be determined by biochemical tests and imaging?


Inclusion

The orthodoxy of sameness and the orthodoxy of the mean, which has dominated much of the thinking in medical science … often impaired our attitude toward clinical research in those days—we tended to want to reduce the human to that 60 kilogram white male, 35 years of age, and make that the normative standard—and have everything extrapolated from that tidy, neat mean, "the average American male."
—Dr. Bernadine Healy, former director of the National Institutes of Health

Since the mid-1980s, specific ideas about what it means for humans to differ have refashioned medical research and practice in the United States. Two decades of reform—reflected in policies about who gets studied and how they are studied—have placed group identity and group difference squarely in view within the biomedical arena. Socially significant categories and characteristics such as sex and gender, race and ethnicity, and age, used routinely when people assert their belonging or are classified by others, have taken on a new salience within modern medicine.

That these socially meaningful aspects of personhood divide humanity into medically distinguishable populations has become a commonplace assertion, even a cliché:

  • "Men and Women Are Different," declared the title of an editorial in a medical journal in 2004. "Sex differences have been noted in most major cardiovascular diseases," the author observed, and "medicine is not exempt from the basic biological fact that men and women are indeed different, and may need to be treated therapeutically as such." According to the New York Times, researchers "have found that men and women sometimes report different symptoms of the same disease, and that certain drugs are more effective in one sex than the other, or produce more severe side effects in one sex."
  • Television news broadcaster Peter Jennings reported in 2002 that hundreds of children die each year from reactions to medications. Because the drugs have not been tested in pediatric populations, these children have become the victims of "guesswork" in the determination of proper dosages.
  • In an article about how medical researchers were seeking members of racial minority groups to participate in clinical trials, the Charleston Post and Courier quoted a registered nurse and oncology research coordinator in 2005: "When you do a randomized trial with an all-white population, you can only extrapolate to the white population. . . . You don't know if it actually works in African-American or Hispanic populations."
  • In 2005, the U.S. Food and Drug Administration (FDA) licensed a pharmaceutical drug called BiDil for treatment of heart failure in African American patients only. Having failed to demonstrate the drug's efficacy in the overall population, BiDil's manufacturers reinvented it as an "ethnic drug" and tested it only on African Americans.

Characteristic of this way of thinking is the assumption that social identities correspond to relatively distinct kinds of bodies—female bodies, Asian bodies, elderly Hispanic male bodies, and so on—and that these various embodied states are medically incommensurable. Knowledge doesn't travel across categories of identity—at least, we can't presume that it does. We are obliged always to consider the possibility that the validity of a medical knowledge claim stops dead when it runs up against the brick wall of difference. While some experts, policymakers, and health advocates have embraced this way of thinking about bodies, groups, and health as obviously valuable, and others have dismissed it as pernicious or silly, my goal is to do neither of the above. I seek to understand, first, how a particular way of thinking about medical difference in the United States helped give rise to an important strategy to improve medical research by making it more inclusive. Second, I intend to show how this strategy gained supporters, took institutional form, and became converted into common sense. Third, I want to shed light on its various consequences for government agencies, biomedical researchers, and pharmaceutical companies, as well as for the social groups targeted by new policies. And finally, by comparing this approach to other ways of thinking about the meanings of identities, differences, and inequalities in biomedical contexts, I aim to understand the extent to which the new common sense might lead to better health and a more just society, as well as the extent to which it either falls short or takes a wrong turn.

An evaluation of the merits of this emphasis on bodily difference might begin by considering a second set of recent claims:

  • According to the U.S. National Center for Health Statistics, out of every 1,000 babies born to U.S. mothers whose race was identified as "White" or "Asian or Pacific Islander" or whose ethnicity was "Hispanic or Latino," fewer than 6 died within the first year of life. By contrast, the infant mortality rate (the number of deaths per 1,000 live births) was 8.6 for "American Indian or Alaska Native" mothers, and 13.8 for "Black or African American" mothers.
  • According to the same source, a white woman born in the United States in the early twenty-first century could expect, on average, to live 11.5 years longer than an African American man.
  • On the basis of a public opinion poll, researchers from Harvard University reported in 2003 that two-thirds of black people in the United States believe that the health care they receive is inferior to that of whites. One in five white respondents agreed with them. Eight out of ten blacks in the study attributed the substandard care to bias, intended or otherwise, on the part of physicians. Only one in five white respondents thought this was the case.

Reports of health disparities—inequalities with regard to health status, access to health care, or experiences within the health care system, measured according to factors such as race, class, gender, geographic location, and sexual identity—have become ubiquitous in recent years. While there is a general sentiment that they constitute a significant social problem, the precise meaning of these disparities has been a matter of some debate. Concern over these disparities also coincides with growing frustration about other problems: the plight of the more than 40 million Americans who lack health insurance (a problem unique to the United States among countries within the so-called developed world), the high price of pharmaceutical drugs, and the quality and character of health care as organized and rationed under the system known as managed care.

Will the new focus on embodied difference lead to the elimination of health disparities? To some extent, the new emphasis that takes categories of human difference as basic units of analysis in medical research and medical treatment has coincided and cooperated with research on these disparities. But in other respects, this way of attending to difference—equating group identities with medically distinct bodily subtypes—has precluded direct attention to reducing inequalities in the domain of health, while encouraging the misleading notion that better health for all can best be pursued through study of the biology of race and sex.

ONE SIZE FITS ALL?

Today's pronouncements about medical differences are often accompanied by self-conscious reflection on social change within biomedical institutions—indeed, by strenuous criticism of past deficiencies. For years, a range of health activists from outside the establishment have issued stinging critiques of neglect by researchers of women, racial and ethnic minorities, and others who have fallen beneath the radar screen.

Well-established biomedical insiders also have had their say. Bernadine Healy, who served as the first (and so far only) female director of the National Institutes of Health (NIH) from 1991 to 1993, commented in 2003 on the worldview that prevailed until relatively recently among medical researchers. When Healy denounced "the orthodoxy of sameness and the orthodoxy of the mean" (see the epigraph at the start of this chapter), she targeted a double whammy of biomedical insensitivity: not only were groups such as women, children, the elderly, and racial and ethnic minorities routinely under-studied in clinical research, but it was assumed that the absence of these groups didn't matter much, because the findings from studying the "normative standard"—middle-aged white men—could simply be generalized to the entire population. Yet the more that researchers have included distinct groups among research subjects, critics have argued, the more it has become apparent that differences do matter and that we cannot just extrapolate medical conclusions from white people to people of color, from men to women, or from middle-aged adults to children or the elderly.

These are not isolated sentiments. Since the mid-1980s, an eclectic assortment of reformers has argued that expert knowledge about human health is dangerously flawed—and health research practices are fundamentally unjust—because of inadequate representation of groups within research populations in studies of a wide range of diseases. The critics have included prominent elected officials, like former member of Congress Patricia Schroeder, who, as cochair of the Congressional Caucus for Women's Issues, asked "Why would NIH ignore half the nation's taxpayers?" Voices calling for change also have come from the ranks of grassroots advocacy groups, clinicians, scientists, professional organizations, and government health officials.

Collectively, reformers have pointed to numerous culprits in the general failure to attend to biomedical difference, but in their bid to change them, they primarily have targeted the state. Reformers have trained their attention on the U.S. cabinet-level Department of Health and Human Services (DHHS) and especially two of its component agencies: the NIH, the world's largest funder of biomedical research, currently providing about $27 billion annually in research grants; and the Food and Drug Administration (FDA), the gatekeeper for the licensing of new therapies for sale. Under pressure from within and without, these federal agencies have ratified a new consensus that biomedical research—now a $94 billion industry in the United States—must become routinely sensitive to human differences, especially sex and gender, race and ethnicity, and age. Academic researchers receiving federal funds, and pharmaceutical manufacturers hoping to win regulatory approval for their company's products, are now enjoined to include women, racial and ethnic minorities, children, and the elderly as research subjects in many forms of clinical research; to measure whether research findings apply equally well to research subjects regardless of their categorical identities; and to question the presumption that findings derived from the study of any single group, such as middle-aged white men, might be generalized to other populations.

These expectations are codified in a series of federal laws, policies, and guidelines issued between 1986 and the present that require or encourage research inclusiveness and the measurement of difference. The new mandate is reflected, as well, in the establishment, from the early 1980s forward, of a series of new offices within the federal health bureaucracy; these include offices of women's health and offices of minority health that support research initiatives focused on specific populations. Versions of the inclusionary policies also have been adopted by the "institutional review boards" (IRBs) located at universities and hospitals across the United States—the committees that review the ethics of proposals to conduct research on human subjects. As a result, these policies affect not just those researchers seeking federal support or those companies seeking to market pharmaceuticals; they may apply, in some fashion or another, to nearly every researcher in the natural or social sciences performing research involving human beings.

In other words, if indeed we are witnessing a repudiation of so-called one-size-fits-all medicine in favor of group specificity, then the shift is apparent not just in the realm of free-floating ideas. It is anchored to institutional changes—new policies, guidelines, laws, procedures, bureaucratic offices, and mechanisms of surveillance and enforcement—that are the products of collective action. These changes matter for those who carry out medical research on humans: researchers are obliged to alter their work practices to comply with new requirements if they want to get funding, and so must pharmaceutical companies, if they seek to get their products on the market. But the changes also matter downstream: they may affect any person who, now or in the future, becomes obliged to claim the status of "patient." More diffusely, but importantly, they also matter insofar as they alter social understandings of what qualities such as race and gender are taken fundamentally to be.

Yet this redefinition of U.S. biomedical research practice has been little remarked upon by social scientists. Several scholars have provided excellent accounts or analyses of recent attempts to include greater numbers of women in biomedical research. In addition, an important and growing body of literature by science studies scholars, while not precisely focused on questions of research inclusion, is analyzing how concepts of race are used in biomedicine—especially, new scientific attempts to take findings from the genetic study of populations and use them to make claims about the medical meaning of race. However, there has been almost no scholarly attention to the broad-scale attempt to dethrone the "standard human" and mandate a group-specific approach to biomedical knowledge production—an identity-centered redefinition of U.S. biomedical research practice that encompasses multiple social categories.

I call this set of changes in research policies, ideologies, and practices, and the accompanying creation of bureaucratic offices, procedures, and monitoring systems, the "inclusion-and-difference paradigm." The name reflects two substantive goals: the inclusion of members of various groups generally considered to have been underrepresented previously as subjects in clinical studies; and the measurement, within those studies, of differences across groups with regard to treatment effects, disease progression, or biological processes.

This way of thinking and doing is by no means the only, or the most important, way in which biomedical research has changed in recent decades. During those same years, as the sociologist Adele Clarke and her coauthors have noted, medicine itself has been remade "from the inside out"—through innovations in molecular biology, genomics, bioinformatics, and new medical technologies; through vast increases in public and private funding for biomedical research; through the ascendance of evidence-based medicine; through the rapid expansion of a global pharmaceutical industry constantly searching for new markets and engaging in new ways with consumers; and through the resurgence of dreams of human enhancement or perfectibility by means of biotechnologies. The point, then, is not to understand how the inclusion-and-difference paradigm has changed "medicine," as if the latter were a fixed target, but rather to consider how this particular emphasis has intersected with the other transformations that have taken place in the domain of biomedical research and the health care sector generally.

The Time and Place of Difference

Although a shift away from the inclusion-and-difference paradigm could certainly occur, at present this model is reasonably well institutionalized within DHHS agencies. Unlike policies that depend for their survival on the support of a particular politician, bureaucrat, or political party in power, the inclusion-and-difference paradigm has sunk roots and seems to have developed its own staying power. It grew up in the Republican administrations of the 1980s and early 1990s, flourished under the Democratic administration of President Bill Clinton, and mostly has survived—despite some explicit attempts to roll it back and halt its expansion—under Republican President George W. Bush.

Interestingly, formal policies concerning inclusion and difference in biomedicine are mostly restricted to the United States—at least so far. Biomedical research and pharmaceutical drug development are increasingly global industries that crisscross national borders, and it is not unreasonable to imagine that policies promoted by a dominant player will diffuse gradually to other countries or that those countries, on their own, will adopt similar institutional responses. To date this has happened to a limited extent, and not without resistance. This peculiarity explains why I focus on the United States, a narrowing of gaze that otherwise might seem surprising when tracking a global industry. I argue that the nation-state, as well as national political struggles, remain powerful contributors to the definitions of medical and social policies, categories, and identities. Many of the policies that I consider are, if not specific to the United States, then applicable only to those persons or firms seeking U.S. federal funding or regulatory approval. However, given the prominence of the United States in this arena—organizations headquartered in the United States account for about 70 percent of the global drug development pipeline—the consequences of U.S. policies for the rest of the world are not insubstantial. And the general questions concerning the medical management of difference have implications for every country that engages in social and technological practices of differentiation and difference-making across human subgroups—that is to say, all of them.

The nation-specific character of this response to difference also has important implications for the framing of the analysis. To the extent that these concerns appear at present to have a special resonance in the United States, then it would not make sense to attribute their emergence into public debate to any inexorable law of scientific or social progress. Instead, the approach will be to look closely at U.S. culture, politics, and history and the particularities of U.S. biomedical and political institutions to explain why debates about identity and difference have left such a distinctive and indelible mark on biomedicine in this country in the late twentieth and early twenty-first centuries. Rather than treating the inclusion-and-difference paradigm as an obvious scientific development, this analysis examines why new understandings about research and human differences have emerged in the United States and supplanted the common sense that prevailed previously.

Of course, it is not hard to imagine why appeals to include women, minorities, and other groups in biomedical research might acquire traction in the United States in recent years. In the wake of what has been called the "minority rights revolution," U.S. political culture now typically promotes equality of opportunity and diversity as worthy social goals, though remedies such as affirmative action have been under increasing attack. And the idea of the United States as a multicultural society has become much more taken for granted, even in the face of resistance. Compared to other countries, the United States is also typically seen as a place where "identity politics"—the assertion of political claims in relation to social identities such as "woman," "Latino," or "Native American"—looms large. Even though the phenomenon that the sociologist John Lie has called "modern peoplehood"—the formation of "an inclusionary and involuntary group identity with a putatively shared history and a distinct way of life"—is everywhere present, certain countries, such as the United States, are more likely to establish policies with respect to these categories, while others seek instead to subsume difference under a broader conception of national citizenship. Finally, given the particular prominence and cultural authority of the biosciences in the United States, it seems not unlikely that this country would witness the emergence of what might be called "bio-multiculturalism."

Yet this program for the medical recognition of difference has gone against the grain of powerful trends toward standardization within biomedicine during the same recent decades in the United States—universalizing tendencies reflected in the movement to develop uniform, evidence-based guidelines for patient care, as well as efforts by both the FDA and the pharmaceutical industry to standardize the drug approval process across national borders. And conversely, the focus on broad social categories, such as women, also has contrasted with the alternative ideology of personalized medicine, the plan to target therapies at the individual. Thus, when viewed against the backdrop of dominant tendencies within biomedicine—emphases on the universal and the individual—the group-based inclusion-and-difference paradigm would seem to lie betwixt and between.

Moreover, the implementation of new inclusionary policies and practices encountered concrete resistance on multiple fronts. Some critics rejected the empirical claim that groups such as women in fact had been under-studied. Defenders of scientific autonomy opposed the politicizing of research and argued that it should be up to scientists, not policymakers, to determine the best ways to conduct medical experiments. Conservatives decried the intrusion of "affirmative action," "quotas," and "political correctness" into medical research. Ethicists and health activists expressed concern about subjecting certain groups, such as children, to the risks of medical experimentation in large numbers. Statisticians and experts on the methodology of the randomized clinical trial argued that requiring comparisons of population subgroups was not only scientifically unsound but also fiscally unmanageable and that it might bankrupt the research enterprise. And many proponents of medical universalism argued that biological differences are less medically relevant than fundamental human similarities: when it comes right down to it, they insisted, people are people. Claims about racial differences, in particular, seemed to sit poorly alongside well-publicized findings by geneticists that, on average, genetic differences within the groups commonly called races are actually greater than the genetic differences between those groups. If racial classifications are biologically dubious, why were legislators and health policymakers calling for labeling research participants by race and testing for racial difference in clinical studies? Some critics went further, charging that the new medical understandings of race and sex differences were eerie echoes of social prejudices from the past, when scientific reports of bodily differences had provided a veneer of respectability to claims that both women and people of color were not just socially, but biologically, inferior.

Given these varied arguments against the new policies and the logic behind them, we should not take for granted the rise of inclusion and the measurement of difference—still less the particular forms these have taken. The birth and maturation of this paradigm require explanation. Indeed, the more we examine the new inclusionary policies, the less obvious they appear—and hence the more we can learn by studying them in depth.

For example, the whole premise of the reforms is to reverse a past history of exclusion and inequality—but, as is so often the case, "history" here is a contestable matter. To what degree can it be established that medical research used to focus on middle-aged, white men and took them to be the norm or standard? And how much have research practices really changed in response to the new policies? Has there been a revolution in medical knowledge-making? Another set of questions concerns the unexamined choices embedded within the inclusionary remedy: Out of all the ways by which people differ from one another, why should it be assumed that sex and gender, race and ethnicity, and age are the attributes of identity that are most medically meaningful? Why these markers of identity and not others? And are there differences among these types of difference, such that the same policy remedies may not be appropriate for each case? In the most general sense, how can we know when to assume that any particular way of differing might have medical consequences? And when is it proper to invoke the unity of the human species—to assert that a body is a body is a body?

Pros and Cons

At least in part, this wave of reform offers an important and valuable corrective to past medical shortsightedness. It exemplifies the more general point, made by feminist theorists and theorists of multicultural citizenship, that sometimes the pursuit of genuine social equality requires policies that do not treat everyone the same—policies that affirm group rights and establish new practices of group representation. These reforms also are broadly consistent with the important perception, expressed variously by feminist theorists and science studies scholars, that the formal knowledge of experts sometimes may be improved through the contributions or redirections introduced by those who have been made marginal to the knowledge production enterprise.

But an emphasis on difference-making also rightly invokes concern when difference essentially is taken to be a biological attribute of a group. These, too, are very contemporary preoccupations: in 2005, Lawrence Summers, the president of Harvard University, ignited a fiery debate when he wondered aloud whether the underrepresentation of women in science and engineering professions might actually reflect innate differences between the sexes. Attempts to treat racial differences as biologically based—as in, for example, the claim that I.Q. tests or other standardized tests track natural differences in mental ability between racial groups—likewise have proven resilient, though they, too, have been the subject of much criticism. To the degree that the inclusion-and-difference paradigm also suggests—albeit in a nonpejorative way—that biology is fundamental in distinguishing races and genders, its logic appears consistent with these other rhetorical moves.

How, then, should the inclusion-and-difference paradigm be evaluated? If it were a simple matter of declaring these changes "good" or "bad," the case would be far less interesting than it turns out to be. My strategy will be to link an investigation of the causes and consequences of these new policies and practices with a detailed analysis of their associated cultural and political logic, including ways of standardizing and classifying human beings, beliefs about the meaning of difference, and possibilities for establishing "biopolitical citizenship." On the basis of that analysis, I will argue that although reformers' characterizations of the biomedical status quo ante were not entirely accurate, they nonetheless did bring attention to a real and important problem. And the solutions that have fallen into place, while imperfectly designed, have in some respects been positive and praiseworthy from the standpoint of both improved health and social justice—even if a formalistic emphasis on compliance with rules sometimes has obscured or interfered with the substantive goals that originally animated the reforms.

However, I also will argue that these reforms have unintended consequences that merit especially close study. By approaching health from the vantage point of categorical identity, they ignore other ways in which health risks are distributed in society. By valorizing certain categories of identity, they conceal others from view. By focusing on groups, they obscure individual-level differences, raising the risk of improper "racial profiling" or "gender profiling" in health care. By treating each of the recognized categories in a consistent fashion, they often ignore important differences across them. And by emphasizing the biology of difference, they encourage the belief that qualities such as race and gender are biological in their essence, as well as the mistaken conclusion that social inequalities are best remedied by attending to those biological particularities. While the inclusion-and-difference paradigm is certainly preferable to any narrow biomedical practice of exclusion, and while it may generate useful knowledge for specific purposes, the net effect of these unintended consequences is to make it a problematic tool for eliminating health disparities. Rather than tackle the problem of health disparities head on, we have adopted an oblique strategy that brings with it a new set of difficulties.


Introduction

Research on the biology of ageing has been conducted for centuries. Survival curves showing the surviving proportion of a population versus time are an intuitive means of illustrating the whole lifespan of a group of organisms and remain a key component of ageing research. Various anti-ageing interventions have been demonstrated to extend the lifespan of model organisms ranging from nematodes to fruit flies to rodents [1,2,3,4], with contradictory reports in rhesus monkeys [5]. These interventions have mainly included calorie restriction (CR), genetic manipulations, and pharmaceutical administration [1,6].

However, whether these interventions extend the lifespan via universal or distinct patterns remains unclear. Traditionally, in ageing research, survival data from lifespan experiments are mainly analysed in the original study, and the data are not collected and stored together. Meta-analyses [7] are mainly limited to either sufficiently large subsets of survival data acquired under identical conditions or the application of methods accounting for varying additional factors. The published meta-analyses of survival data have mostly assessed CR [8,9,10]. For example, CR reportedly extends lifespan significantly, and the proportion of protein intake is more important for lifespan extension than the degree of CR [9]. No study has demonstrated whether CR, genetic manipulation or pharmaceutical administration is superior at extending lifespan and delaying ageing.

Here, we attempted to resolve this question by conducting a comprehensive and comparative meta-analysis of the effect patterns of these different interventions and their corresponding mechanisms via survival curves. We have focused our analyses on Caenorhabditis elegans and Drosophila, powerful model systems that are widely used in ageing research. We developed an algorithm that enabled us to combine multiple strains of these species from a large number of studies and to extract general trends from relevant results. Our main aims were as follows: (i) to investigate the effect patterns of different anti-ageing interventions on survival curves and to identify the most effective and healthiest interventions; (ii) to determine whether the effect on longevity is conserved between C. elegans and Drosophila; and (iii) to uncover the pattern of potential anti-ageing mechanisms between different interventions. Our re-analysis of survival data using this new method highlights the overall advantages of CR in delaying ageing and provides a direction for the discovery of effective anti-ageing strategies.
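
The authors' actual algorithm is not described in detail in this excerpt; the following is a hypothetical sketch of one way survival curves from different studies could be placed on a common, dimensionless time axis (each curve rescaled to its own median lifespan) before averaging. The function name, grid range, and toy data are assumptions made for illustration only.

```python
import numpy as np

def pooled_survival(curves, n_points=101):
    """Hypothetical pooling of survival curves from different studies.

    Each curve is a (time, surviving_fraction) pair. Times are rescaled to
    each curve's median lifespan so that studies recorded in days or weeks
    can be averaged on a common, dimensionless axis."""
    grid = np.linspace(0.0, 2.0, n_points)          # 0 to 2 x median lifespan
    resampled = []
    for t, s in curves:
        t = np.asarray(t, dtype=float)
        s = np.asarray(s, dtype=float)
        # survival decreases with time, so reverse the arrays for interpolation
        median_lifespan = np.interp(0.5, s[::-1], t[::-1])
        resampled.append(np.interp(grid, t / median_lifespan, s,
                                   left=1.0, right=0.0))
    return grid, np.mean(resampled, axis=0)

# Two toy "studies" with different absolute lifespans
t1 = np.linspace(0, 30, 31)
s1 = 1.0 - t1 / 40.0
t2 = np.linspace(0, 60, 61)
s2 = 1.0 - t2 / 80.0
grid, pooled = pooled_survival([(t1, s1), (t2, s2)])
```

Rescaling to the median lifespan is only one plausible normalization; the published method may well differ.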


Methods

Sample Population (Boston, MA)

The samples used for this study were obtained from a previously reported double-blind, randomized study that consisted of a 4-week control period, a 20-week treatment period and a 16-week recovery period [11, 13]. Participants included sixty young men (age range 18 to 35 years) and sixty-one older men (age range 60 to 75 years). All subjects provided informed written consent according to a protocol approved by the Charles Drew University and Research and Education Institute. Exclusion criteria included 1) presence of prostate disease, defined as cancer, an American Urological Association symptom score greater than 7, or a prostate-specific antigen level greater than 4 ng/ml, 2) hematocrit above 48%, 3) diabetes mellitus, 4) heart problems, including myocardial infarction or congestive heart failure, assessed by 12-lead electrocardiogram monitoring (including during exercise) to exclude exertional symptoms, 5) severe sleep apnea, 6) administration of androgenic steroids in the past year, 7) participation in sports events, resistance training or moderate to heavy endurance exercise training, and 8) baseline testosterone levels below 300 ng/dL. For a more in-depth description of enrollment criteria and physical function, see Bhasin et al. [11, 13]. Stored serum samples from baseline and after treatment were used from 20 of the younger men and 19 of the older men, based on availability. Mean baseline testosterone levels were 586 ng/dL for the younger men and 358 ng/dL for the older men.

Sample Population (Houston, TX)

Stored baseline serum samples were used from 20 older men (age range 60 to 85 years) recruited through the Sealy Center on Aging Volunteer Registry at the University of Texas Medical Branch (UTMB) in Galveston, TX, for inclusion in a randomized, double-blinded, placebo-controlled testosterone intervention study. All subjects provided informed written consent according to the guidelines established by the UTMB institutional review board and were medically screened. Qualified subjects had endogenous testosterone concentrations below 500 ng/dL and were otherwise healthy. To assess medical eligibility, subjects underwent a battery of tests including a history and physical examination, complete blood count, metabolic panel including fasting serum glucose and insulin, an electrocardiogram (ECG), plasma electrolytes, prostate-specific antigen (PSA), liver and renal function tests and a lipid panel. Subjects were included based upon their ability to provide regular transportation to the Clinical Research Center (CRC) at UTMB. Subject exclusion criteria included the following: 1) serum testosterone > 500 ng/dL, 2) indication of cardiovascular disease or heart problems assessed via a resting ECG and a Bruce protocol exercise stress test, 3) previous history of angina or myocardial infarction, 4) PSA > 4.0 μg/L, 5) history of prostate cancer, 6) history of severe benign prostatic hypertrophy, 7) LDL > 200 mg/dL, 8) hematocrit > 51%, 9) hypertension (>140/90 mmHg), 10) BMI > 35, 11) history of hepatitis or a 3 × elevation of Alk phos, ALT or AST, 12) illnesses including diabetes, cancer, COPD, sleep apnea or any other illness causing disability, 13) bone-related disorders, 14) DEXA lumbar score > -2.5, 15) currently taking Coumadin, glucocorticoids, androgens, or anti-bone-resorptive agents, and 16) regular physical exercise. These inclusion/exclusion criteria reflect those recommended by the Clinical Guidelines Subcommittee Task Force of The Endocrine Society [26] and previously published trials with testosterone and older men [27, 28]. The mean baseline testosterone level for these older men was 320 ng/dL.

Testosterone supplementation (Boston, MA)

Serum samples were obtained from men who participated in a randomized testosterone supplementation trial. Men were treated with monthly injections of a long-acting GnRH agonist (Lupron Depot, 7.5 mg; TAP, North Chicago, IL) to suppress endogenous testosterone production and, concomitantly, weekly injections of one of five doses of testosterone enanthate (Delatestryl, Savient Pharmaceuticals, NJ) [11]. Based on dichotomous functional outcomes in previous reports, testosterone doses were categorized as low (i.e., 25 mg, 50 mg, and 125 mg) and high (i.e., 300 mg and 600 mg).

Biomarker measurements

The serum specimens were selected based on quality and availability. Quality was determined by visual inspection. Serum specimens for both populations had been collected, centrifuged and stored under similar conditions at both sites. Serum factors were measured at two time intervals: early in the study, i.e., at baseline or within the first two weeks of starting GnRH and testosterone treatment, and later in the study, i.e., twenty weeks after initiation of GnRH and testosterone treatment. Insulin-like growth factor I (IGF1) was measured by enzyme-linked immunosorbent assay (ELISA) using a non-extraction IGF-1 ELISA kit (Diagnostic Systems Laboratories, TX) in both the young and the old subjects at baseline and after treatment. Pro-collagen III N-terminal peptide (PIIINP) was measured using a validated equilibrium radioimmunoassay (RIA) (Orion Diagnostics, Espoo, Finland), as described previously [15, 29], in both the young and the old subjects at baseline and after testosterone supplementation.

The remaining serum factors were measured using a multiplex Luminex platform (Panomics, Fremont, CA) as described previously [30]. This assay uses xMAP technology, a multi-analyte profiling Luminex technology, to detect and quantify multiple protein targets. The samples were run on a LiquiChip (Qiagen) and were analyzed using Qiagen LiquiChip Analyzer software (Version 1.0.5.17455). A 35-plex was run on serum from the young men at baseline and after testosterone treatment, measuring ENA78, Eotaxin, FGF Basic, G-CSF, GM-CSF, GRO-α, IFNγ, IL1α, IL1β, IL-10, IL-12(p40), IL-12(p70), IL-13, IL-15, IL-17, IL-17F, IL-1RA, IL-2, IL-4, IL-5, IL-6, IL-7, IL-8, IP10, Leptin, MCP-3, MIG, MIP1α, MIP1β, NGF, PDGF-BB, RANTES, TNFα and TNFβ. For those biomarkers that showed a change with testosterone treatment in the young, another 3-plex was run on serum from the older men treated with testosterone at baseline and after treatment, measuring leptin, MIG and ENA78. A 30-plex assay was run to measure baseline (pre-treatment) cytokine levels in a separate group of older men, measuring eotaxin, FGF basic, GCSF, GM-CSF, GROα, IFNγ, IL1α, IL1β, IL-10, IL-12(p40), IL-12(p70), IL-13, IL-17A, IL-2, IL-4, IL-5, IL-6, IL-7, IL-8, IP-10, MCP-1, MCP-3, MIP-1α, MIP-1β, NGF, PDGF-BB, RANTES, TNFα, TNFβ, and VEGF. Values outside of the range of the standard curve were omitted for the multiplex assays. The lower limit of detection for these analytes was 1 pg/ml. The CV range for inter-assay variability for the analytes was 6.73% to 17.25%, with an average of 12.24%.
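
As an aside on the inter-assay variability figures reported above, the coefficient of variation is simply the standard deviation of repeated control measurements divided by their mean. A minimal sketch, with made-up replicate values:

```python
import numpy as np

def inter_assay_cv(replicate_means):
    """Inter-assay coefficient of variation (%) for one analyte:
    sample SD of run-to-run control means divided by their grand mean.
    The replicate values used below are invented for illustration."""
    vals = np.asarray(replicate_means, dtype=float)
    return 100.0 * vals.std(ddof=1) / vals.mean()

# Illustrative control values for one analyte across four runs (pg/ml)
print(round(inter_assay_cv([102.0, 96.5, 110.2, 99.1]), 2))
```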

Statistical Methods

Baseline values of biomarkers were compared between younger and older men using a parametric two-sample t-test assuming unequal variances and a non-parametric two-sample Wilcoxon rank-sum test, which gave similar p-values. The determination of significance in biomarker response was based on a matched-pair analysis of early versus late levels, and a bivariate categorical analysis of testosterone dose (low versus high) and age (younger men versus older men), using a parametric two-sample t-test with unequal variances and a non-parametric two-sample Wilcoxon rank-sum test with similar results. Statistical analyses were carried out in STATA version 8.0 (Stata Corp, College Station, TX) and JMP 8.0.2 (SAS Institute Inc, Cary, NC). Values are displayed as the mean plus/minus standard deviation unless otherwise indicated. Box plots are shown as quantiles, with the median (line in box), quartile range (edges of box), and extremes (vertical lines and points). In single-plex assays, significance was set at p < 0.05. In multiplex assays, a p-value < 0.05 was used to denote statistical significance, while a p-value < 0.05/10 = 0.005 was used to take into account multiple comparisons using Bonferroni correction.
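
For illustration only, a sketch of the described comparison using SciPy: Welch's unequal-variance t-test alongside the Wilcoxon rank-sum test, with a Bonferroni-adjusted threshold (0.05/10 = 0.005, as in the text) for the multiplex panels. The data, group sizes, and function name are invented for the example.

```python
import numpy as np
from scipy import stats

def compare_groups(young, old, n_comparisons=10, alpha=0.05):
    """Compare one biomarker between two groups: Welch's t-test (unequal
    variances) and a Wilcoxon rank-sum test, plus a Bonferroni-adjusted
    significance threshold for multiplex panels."""
    t_stat, p_t = stats.ttest_ind(young, old, equal_var=False)   # Welch's t-test
    z_stat, p_w = stats.ranksums(young, old)                     # rank-sum test
    bonferroni_alpha = alpha / n_comparisons                     # 0.05 / 10 = 0.005
    return {"p_ttest": p_t, "p_wilcoxon": p_w,
            "significant_after_bonferroni": min(p_t, p_w) < bonferroni_alpha}

# Illustrative baseline biomarker values for younger vs. older men
rng = np.random.default_rng(1)
younger = rng.normal(100, 15, size=20)
older = rng.normal(115, 20, size=19)
print(compare_groups(younger, older))
```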


Purpose

The transcription factor nuclear erythroid-2 like factor-2 (Nrf2) is the master regulator of antioxidant defense. Data from animal studies suggest that exercise elicits significant increases in Nrf2 signaling, and that this signaling is impaired with aging, resulting in decreased induction of phase II detoxifying enzymes and greater susceptibility to oxidative damage. We have previously shown that older adults have lower resistance to an oxidative challenge compared with young adults, and that this response is modified by physical fitness and phytonutrient intervention. We hypothesized that a single bout of submaximal exercise would elicit increased nuclear accumulation of Nrf2, and that this response to exercise would be attenuated with aging.

Methods

Nrf2 signaling in response to 30 min of cycling at 70% VO2max was compared in young (23±1 y, n=10) and older (63±1 y, n=10) men. Blood was collected at six time points: pre-exercise and 10 min, 30 min, 1 h, 4 h, and 24 h post-exercise. Nrf2 signaling was determined in peripheral blood mononuclear cells by measuring protein expression (western blot) of Nrf2 in whole-cell and nuclear fractions, and of whole-cell SOD1 and HMOX1, as well as gene expression (RT-PCR) of the downstream Nrf2-ARE antioxidants SOD1, HMOX1, and NQO1.

Results

Baseline protein expression did not differ between groups. The exercise trial elicited a significant increase in whole-cell Nrf2 (P=0.003) in both the young and older groups. Nuclear Nrf2 levels increased significantly in the young but not the older group (P=0.031). Exercise elicited significant increases in gene expression of HMOX1 and NQO1 in the young (P=0.006 and P=0.055, respectively), whereas gene expression in the older adults was repressed. There were no significant differences in SOD1 or HMOX1 protein expression.

Conclusion

These findings indicate a single session of submaximal aerobic exercise is sufficient to activate Nrf2 at the whole cell level in both young and older adults, but that nuclear import is impaired with aging. Additionally we have shown repressed gene expression of downstream antioxidant targets of Nrf2 in older adults. Together these translational data demonstrate for the first time the attenuation of Nrf2 activity in response to exercise in older adults.


Results

Seventy-nine patients with Thalassemia Major were enrolled in the study, ranging in age from 9 to 47 years. Ethnic background reflected the diversity of the American West Coast: 20 Chinese, 16 non-Chinese Far East, 14 Indian/Pakistani, 16 Mediterranean, and 13 of other assorted backgrounds. Genotypes were not available in all subjects but were predominantly beta thalassemia major, with a few E-beta thalassemia patients and two known alpha thalassemia patients. All patients were clinically well, with no history of heart failure or arrhythmias in the previous year, and none were on cardiac medications. Transfusion and chelation duration were 18.1 ± 9.0 years and 22.2 ± 8.7 years, respectively. Median transfusion interval was 3 weeks, with no patient transfused less frequently than every 4 weeks. Half of the patients were taking deferoxamine and half had switched to deferasirox when it became available in November 2005. One patient was taking deferiprone and two were taking combination deferiprone and deferoxamine. Iron burdens were independent of chelation treatment. One patient was excluded because the MRI and ECG studies were more than 6 months apart. There were 45 patients with cardiac T2* values less than 20 ms, indicating MRI-detectable cardiac iron overload, and 33 patients with cardiac T2* greater than 20 ms (Table I). Patients with cardiac iron were an average of 5 years older and more likely to be female. Age and gender were interdependent; there were no living male patients older than 30 years with cardiac iron, compared with 13 females, raising the possibility of a survival bias [4, 23].

Parameter T2* greater than 20 T2* less than 20 P value
Male 21 14 0.004
Female 12 31 0.004
Age (years) 21.6 ± 8.7 26.1 ± 8.2 0.026
BSA (m²) 1.49 ± 0.25 1.54 ± 0.23 0.417
Hemoglobin (g/dl) 11.9 ± 1.9 11.8 ± 1.5 0.77
Ferritin (μg/L) 2701 ± 2511 4170 ± 4491 0.07
Iron (μg/dL) 212.1 ± 77.2 219.8 ± 100.1 0.71
BNP (ng/L) 14.1 ± 17 31.4 ± 43.0 0.063
LDH (U/L) 456.4 ± 299.1 386.5 ± 220.7 0.32
hs-CRP (mg/L) 1.1 ± 1.1 2.9 ± 4.9 0.023
HR at MRI (bpm) 79 ± 9.5 78 ± 10.6 0.53
SBP (mmHg) 107.4 ± 11.6 107.9 ± 10.2 0.85
DBP (mmHg) 63.5 ± 10.2 67.5 ± 9.2 0.084
MAP (mmHg) 82.0 ± 10.7 83.7 ± 9.4 0.47
MRI CI (L/min/m²) 4.1 ± 0.6 3.8 ± 0.9 0.06
MRI LVEF (%) 63.1 ± 4.9 61.2 ± 6.4 0.15
MRI RVEF (%) 60.7 ± 6.5 61.0 ± 6.6 0.87
MRI LVMESi (g/m²) 64.6 ± 20.7 64.3 ± 18.9 0.91
Liver Iron Content (mg/g) 11.1±12.9 14.1±9.8 0.26
  • Data are expressed as mean ± one standard deviation. BSA, body surface area; SBP, systolic blood pressure; DBP, diastolic blood pressure; MAP, mean arterial pressure; CO, cardiac output; LVEF, left ventricular ejection fraction; RVEF, right ventricular ejection fraction; LVMES, left ventricular mass at end systole.

Both groups were heavily iron overloaded, but liver iron, ferritin, and serum iron were not significantly different between the two groups. High-sensitivity CRP, a non-specific marker of systemic inflammation, was increased in patients having cardiac iron overload. Patients having low-risk hs-CRP by American Heart Association classification (<1.0 mg/L) had normal heart rate (Z-score 0.02 ± 0.84), while patients with higher hs-CRP had heart rates nearly one standard deviation above average values for the reference population (Z-score 0.90 ± 0.97, P < 0.0002). Twenty-four-hour mean HR on Holter recording was correlated with HR on the resting ECG (r² = 0.32, P < 0.0001) but was an average of 7 beats per minute faster (P < 0.0001); mean HR was also increased in patients with elevated hs-CRP. Cardiac systolic function was not significantly different between the two patient groups. However, LV dysfunction (MRI LVEF less than 56%) was observed in six patients; five of six had severe iron deposition (T2* less than 10 ms). Hemoglobin levels were similar between the two groups of patients.

Table II shows the ECG characteristics of patients with Thalassemia Major and iron overload (T2* less than 20 ms) versus those without iron overload (T2* greater than 20 ms). Since cardiac function and hemoglobin were similar, ECG changes between the two groups most likely represented preclinical effects of cardiac iron. Repolarization indices were the most sensitive discriminators. QT interval was greater in patients having cardiac iron deposition (Fig. 1A). ROC analysis demonstrated an AUROC of 0.68 for the presence of cardiac iron, with an optimal cutoff of 407 ms (horizontal line). The group differences persisted after correcting the QT for heart rate by two validated methods (the Bazett and Fridericia methods), but discrimination was worse. Ten patients had QTc prolongation (greater than 450 ms for men, 460 ms for women), but only seven had cardiac iron (P = 0.39). Figure 1B demonstrates the interaction between QT interval and beat-to-beat (RR) interval; the horizontal line indicates the 407 ms cutoff, and curved lines represent power-law fits to the QT-RR relationship for both patient groups. Repolarization appears to prolong to a greater extent at low heart rate (long RR) in patients having cardiac iron, although the curve-fit parameters were not statistically different from one another.
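
For reference, the Bazett and Fridericia corrections divide the measured QT by the square root and the cube root of the RR interval (in seconds), respectively; a minimal sketch with illustrative values:

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett correction: QTc = QT / sqrt(RR), with RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia correction: QTc = QT / RR^(1/3), with RR in seconds."""
    return qt_ms / rr_s ** (1.0 / 3.0)

# Illustrative beat: QT = 400 ms at a heart rate of 75 bpm (RR = 60/75 s)
rr = 60.0 / 75.0
print(qtc_bazett(400, rr), qtc_fridericia(400, rr))
```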

ECG Parameter T2* greater than 20 T2* less than 20 P value
PR (ms) 147.1 ± 14.4 151 ± 22.5 0.36
Pduration (ms) 96.6 ± 10.6 93.3 ± 9.1 0.31
QRS (ms) 87.1 ± 10.6 90.4 ± 9.1 0.167
QT (ms) 375.8 ± 22.5 397 ± 34.5 0.0018
QTcB (ms) 415.7 ± 22.8 435.6 ± 24.0 0.0005
QTcF (ms) 408.9 ± 20.6 429.1 ± 23.1 0.0002
ECG HR (bpm) 74 ± 10.1 74 ± 12.7 0.85
P Axis (degrees) 47.8 ± 17.6 46.8 ± 18.2 0.81
QRS axis (degrees) 60.0 ± 22.7 50.8 ± 25.7 0.105
T axis (degrees) 50.3 ± 17.3 36.0 ± 21.0 0.0018
QRS-to-T axis angle difference (degrees) 17.6 ± 13.5 27.6 ± 23.2 0.021

A: Plot of QT interval and T axis versus cardiac T2*. Patients with cardiac iron (located left of the vertical line at T2* of 20 ms) exhibit longer QT values. The horizontal line at a QT interval of 407 ms represents the optimal cutoff by ROC analysis. B: QT interval as a function of RR interval for patients with (filled circles) and without (open circles) cardiac iron. Curved lines indicate the best power-law fit between QT and RR.

The T-wave axes were leftward shifted in patients having cardiac iron. Figure 2 demonstrates the T-wave axis as a function of cardiac T2*; the curve is somewhat similar to previously published T2*-LVEF relationships [24]. ROC analysis yielded an AUROC of 0.72 and an optimal cutoff of 43 degrees (horizontal line).

Plot of T axis versus cardiac T2*. MRI detectable cardiac iron was associated with leftward shift of the T-wave axis (less than 43 degrees, horizontal line).

Table III summarizes age- and gender-corrected Z-scores for HR, axis, and intervals between the two groups. No significant gender differences in Z-scores were noted, indicating good correction by the population norms. As a group, TM patients had tachycardia and lengthening of the corrected QT interval regardless of cardiac iron status, although the magnitude of QT increase was significantly greater in patients having cardiac iron. In contrast, leftward shift of the T-axis was observed only in patients with cardiac iron. Neither QT prolongation nor leftward shift of the T-axis was correlated with left ventricular ejection fraction.

ECG Parameter Z-score All TM Patients T2* greater than 20 T2* less than 20 P value
PR 0.1 ± 0.9 0.0 ± 0.7 0.1 ±1.1 0.69
QRS −0.1 ± 0.9 −0.3 ± 1.0 0.0 ± 0.9 0.162
QT 0.3 ± 1.1 0.0 ± 0.8 0.5 ± 1.2* 0.02
QTcB 1.1 ± 1.2* 0.7 ± 1.1* 1.4 ± 1.2* 0.003
QTcF 1.4 ± 1.3* 0.9 ± 1.0* 1.7 ± 1.3* 0.002
HR 0.4 ± 1.0* 0.4 ± 1.0 0.4 ± 1.0* 0.8
QRS axis 0.1 ± 0.9 0.2 ± 0.8 −0.1 ± 0.9 0.12
T Axis NA 0 ± 1 −0.8 ± 0.9* 0.012
  • Z-Scores reported as (mean ± standard deviation).
  • All patients are separated according to their cardiac iron status. Asterisks demarcate parameters that differ significantly (P < 0.05) from the reference population. P values in the right hand column describe the differences between patients with and without detectable cardiac iron.

Cardiologist ECG interpretations are summarized in Table IV. Cardiologist morphologic assessment was consistent with the observed changes in axis and intervals. Although tachycardia and various forms of conduction abnormalities were observed, they were not associated with cardiac iron. The most common abnormalities associated with cardiac iron were non-specific ST-T wave changes (n = 19), prolonged QTc (n = 10), inferior lead T-wave inversions (n = 5), and sinus bradycardia (n = 4). LVH was equally distributed in both groups (3/33 versus 5/45). Taking the specific metrics together (a simple Boolean “OR” operation) yielded a sensitivity and specificity of 73% and 82%, respectively, for the presence of detectable cardiac iron. Overlap between the qualitative and quantitative assessments of abnormal repolarization was incomplete, allowing them to be used together. Combining the criteria of QT greater than 407 ms, T-wave axis less than 43 degrees, or an abnormal reading (nonspecific ST-T wave changes, prolonged QTc, inferior lead T-wave inversions, bradycardia) yielded a sensitivity of 89% and a specificity of 70%. The combined metric yielded a positive predictive value of 80% and a negative predictive value of 70%.

Cardiologist ECG reading T2* greater than 20 T2* less than 20 Specificity (%) Sensitivity (%)
Left atrial hypertrophy 0 1 100 2
Left ventricular hypertrophy 3 5 91 11
Right axis deviation 0 2 100 4
Right ventricular hypertrophy 1 0 97 0
Sinus bradycardia 0 4 100 9
Sinus tachycardia 1 1 97 2
Long QT interval 3 14 91 31
Symmetric T-wave inversions 0 5 100 11
Low voltage QRS 0 2 100 4
Nonspecific ST/T wave changes (NSST) 4 19 88 42
Left anterior hemiblock 1 0 97 0
Right bundle branch block, interventricular conduction delay 2 0 94 0
NSST, QT prolongation, T-wave abnormalities 6 29 82 64
NSST, QT prolongation, T-wave abnormalities, bradycardia 6 33 82 73
NSST, QT prolongation, T-wave abnormalities, bradycardia, QT greater than 404, T axis < 43° 10 40 70 89
  • Sensitivity and specificity for each parameter in predicting detectable cardiac iron (T2* less than 20 ms) are listed in the right two columns. The final two rows of the table list the sensitivity and specificity of various logical parameter combinations.
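
The sensitivity, specificity, and predictive values quoted above follow from a standard 2x2 diagnostic table; a small sketch (the counts used below are illustrative and are not taken from Table IV):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from 2x2 diagnostic counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Example: an ECG criterion flags 36 of 45 iron-loaded patients (tp=36, fn=9)
# and 7 of 33 patients without detectable iron (fp=7, tn=26)
print(diagnostic_metrics(tp=36, fp=7, fn=9, tn=26))
```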

Univariate logistic regression analysis was performed using cardiac iron overload as the outcome variable. Subsequently, we performed several multivariate stepwise logistic regression analyses to determine a small subset of variables that were most strongly associated with cardiac iron overload. In addition, a further subset analysis revealed that the relationships clearly depended on gender, and to simplify the results, all subsequent analyses were stratified by gender. These analyses indicated that QT, QTcB, T-axis, and HR, along with gender, were most strongly associated with cardiac iron overload. To account for the observed (and physiologically predictable) interactions between these variables, a regression tree analysis (recursive partitioning) was performed to clarify the interactions between the variables in the model and the presence of cardiac iron. For both males and females, these results could be collapsed into three subsets of patients with increasing risk of cardiac iron overload. T-axis and heart rate best stratified risk in females, while QT and heart rate best stratified risk in males. Although there are many statistical models that could be used to represent these relationships, and these results may not generalize exactly to other populations, they clearly demonstrate in a parsimonious way that iron influences repolarization in a rate-dependent manner. If we consider high- and highest-risk ECGs as diagnostic for cardiac iron, these partition trees yielded AUROCs of 88.3 and 87.1 for females and males, respectively.
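
The paper's recursive-partitioning analysis cannot be reproduced from the text alone, but a gender-stratified regression tree of the kind described could be sketched as follows. The column names, tree depth, synthetic data, and in-sample AUROC scoring are all assumptions made for illustration, not the authors' model.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

def fit_partition_trees(df, features, outcome="cardiac_iron", max_depth=2):
    """Fit one shallow decision tree (recursive partitioning) per gender
    and report an in-sample AUROC for each; purely illustrative."""
    models = {}
    for sex, grp in df.groupby("sex"):
        tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
        tree.fit(grp[features], grp[outcome])
        if grp[outcome].nunique() > 1:
            auc = roc_auc_score(grp[outcome],
                                tree.predict_proba(grp[features])[:, 1])
        else:
            auc = float("nan")   # AUROC undefined for a single-class group
        models[sex] = (tree, auc)
    return models

# Synthetic example data: QT (ms), heart rate (bpm), T-wave axis (degrees)
rng = np.random.default_rng(2)
n = 80
df = pd.DataFrame({
    "sex": rng.choice(["F", "M"], n),
    "QT": rng.normal(390, 30, n),
    "HR": rng.normal(75, 10, n),
    "T_axis": rng.normal(43, 20, n),
})
df["cardiac_iron"] = ((df["QT"] > 400) | (df["T_axis"] < 35)).astype(int)
models = fit_partition_trees(df, ["QT", "HR", "T_axis"])
```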


Discussion

Although the Sokolow–Lyon and Cornell criteria are frequently used in clinical practice and appear widely in international guidelines, in the modern era of increasing obesity, their diagnostic accuracy is well below an acceptable level for a diagnostic screening tool. This study has shown that by incorporating BMI into the ECG algorithm by a simple adjustment, the diagnostic sensitivity can be improved without a significant decrease in specificity.

The challenges of ECG screening for LVH in the modern population

In current practice, LVH is most accurately determined by CMR, as its accuracy far exceeds that of either echocardiography or ECG.22 However, the greater availability, simplicity of operation and lower cost associated with the ECG have resulted in its continued worldwide use. At the same time, obesity affects the surface ECG significantly, reducing voltage amplitude through a combination of leftward LV axis deviation, increased chest wall fat and increased pericardial fat. In this study, obesity was observed to reduce the sum of the R wave amplitude in V5 or V6 and the S wave in V1 by up to 8 mm. Indeed, we demonstrate that the sensitivity of the Sokolow–Lyon criteria is only 3.1% in obesity, with a specificity reaching 99.0%. Although the specificity seems excellent, this likely reflects the fact that in obesity the degree of LVH required to generate a >35 mm Sokolow–Lyon index is much greater, reducing its diagnostic power (reflected by the Youden index of 0.11). It is quite clear from this study that the Sokolow–Lyon criteria are completely inadequate as a diagnostic screening test in the modern era of obesity. Although the Cornell criteria are seen here to be less vulnerable to increasing BMI, they also have poor diagnostic sensitivity (14.8% and 11.9%, respectively).

Adjusting the ECG for obesity

It has previously been shown that the diagnostic sensitivity of the ECG can be improved by accounting for obesity. However, prior studies have either used 2D echocardiography to determine LV mass, which is itself limited in obesity,10,13,23 or have used complex regression-based adjustment equations that are unsuited to time-limited modern clinical medicine.24,25 This is the first study to use CMR to investigate the effects of both obesity and the associated leftward LV axis deviation on ECG LVH criteria. We have shown that being overweight reduces the Sokolow–Lyon voltage by on average 4 mm, and obesity by 8 mm. When using a correction factor of +4 mm in overweight and +8 mm in obesity, the diagnostic sensitivity of these criteria is increased (by up to 30% in overweight) to a level that approaches that seen in normal weight. Importantly, although specificity for LVH does decrease after adjustment (by up to 5.5%), it remains excellent (92.9–97.9%).
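A minimal sketch of the adjustment described above: add +4 mm in overweight and +8 mm in obesity to the measured Sokolow–Lyon voltage sum before applying the usual >35 mm cut-off. The BMI bands of 25 and 30 kg/m² are the conventional WHO categories and are an assumption here; the function is illustrative only.

```python
def sokolow_lyon_positive(s_v1_mm: float, r_v5_or_v6_mm: float, bmi: float) -> bool:
    """BMI-adjusted Sokolow-Lyon LVH screen (illustrative sketch only).

    Adds +4 mm to the measured voltage sum in overweight (BMI 25-29.9)
    and +8 mm in obesity (BMI >= 30), then applies the usual > 35 mm cut-off.
    """
    voltage = s_v1_mm + r_v5_or_v6_mm
    if bmi >= 30:
        voltage += 8.0
    elif bmi >= 25:
        voltage += 4.0
    return voltage > 35.0

# Example: a 30 mm voltage sum is negative in a normal-weight patient
# but positive after the +8 mm obesity correction.
print(sokolow_lyon_positive(s_v1_mm=12, r_v5_or_v6_mm=18, bmi=32))  # True
print(sokolow_lyon_positive(s_v1_mm=12, r_v5_or_v6_mm=18, bmi=23))  # False
```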

Given the global utilisation of these criteria as well as the worldwide increase in obesity, this finding is of significant clinical impact and should allow a substantial increase in the detection of anatomical LVH using ECG screening. As ECG-determined LVH appears in both European and US guidelines4 and is known to predict mortality,2,26 improving ECG diagnostic performance should quickly translate into significant patient benefit. However, despite these significant improvements, the sensitivity of the ECG remains poor at around 30%. In the current era, this is a level that would preclude the ECG being adopted as a screening tool for LVH if it were presented as a novel diagnostic test.


The Recent CMS Determination on WATCHMAN: What Can We Expect from Here?

The WATCHMAN (Boston Scientific, Marlborough, MA) left atrial appendage (LAA) occlusion device has recently been approved for reimbursement by the Centers for Medicare & Medicaid Services (CMS). Despite many other contenders, and even an approved general tissue closure device used to close the LAA, this is the first specific LAA occlusion device approved by the CMS. It is important to note the differences between the CMS approval and the Food and Drug Administration (FDA) approval of March 2015. In the FDA approval, the device was restricted to those patients deemed suitable for long-term warfarin, remaining true to the indications for enrollment in the clinical trials. The CMS removes that restriction and in fact states that the device is an option for those patients deemed not suitable for long-term anticoagulation (but who could still tolerate short-term warfarin).

The WATCHMAN, like other similar devices, was conceived to reduce thromboembolism in patients with atrial fibrillation (AF) who have an increased risk of bleeding on anticoagulation. The LAA is believed to be the source of thromboembolism in patients with AF. 1 Although mechanistically reasonable, there is not universal agreement that the LAA is the source of all thromboembolism in patients with AF. In fact, there are data that the incidence of left atrial thrombus outside the LAA is 11% in non-valvular AF and increases to > 50% in valvular AF. 2 Thus, LAA occlusion, even if 100% safe and effective, would not prevent all thromboembolic events.

Based on a number of randomized controlled trials, there is strong evidence that systemic anticoagulation reduces thromboembolism. 3 Indeed, for all the novel anticoagulants the comparator has been warfarin rather than placebo; however, anticoagulation does not eliminate the risk. The risk of stroke can be estimated with the CHA2DS2-VASc score, a combination of congestive heart failure, hypertension, age, diabetes, prior stroke or transient ischemic attack, vascular disease, and sex. Yet many of the risk factors for thromboembolism are the same as those for the risk of a major bleed. For example, one of the most utilized bleeding risk scores is the HAS-BLED, a combination of hypertension, abnormal renal and liver function, stroke, prior bleed, labile international normalized ratio (INR), elderly age, and abuse of drugs or alcohol. 4 Thus, it is often the case that individual patients will be at high risk of both stroke and major bleeds.
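For reference, the sketch below implements the conventional point assignments behind the two scores mentioned above (CHA2DS2-VASc and HAS-BLED). The function signatures are invented for the example, and the schemes shown are the standard published ones rather than anything specific to this analysis.

```python
def cha2ds2_vasc(chf, htn, age, diabetes, stroke_tia, vascular, female):
    """Conventional CHA2DS2-VASc point assignment (illustrative sketch)."""
    score = chf + htn + diabetes + vascular + female + 2 * stroke_tia
    if age >= 75:
        score += 2      # A2: age >= 75 years
    elif age >= 65:
        score += 1      # A: age 65-74 years
    return score

def has_bled(htn, renal, liver, stroke, bleeding, labile_inr, age, drugs, alcohol):
    """Conventional HAS-BLED point assignment (illustrative sketch)."""
    return (htn + renal + liver + stroke + bleeding + labile_inr
            + (1 if age > 65 else 0) + drugs + alcohol)

# Example: a 78-year-old woman with hypertension and diabetes scores 5.
print(cha2ds2_vasc(chf=0, htn=1, age=78, diabetes=1, stroke_tia=0,
                   vascular=0, female=1))  # 5
```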

The PROTECTion in Patients With Atrial Fibrillation (PROTECT-AF) trial was the first randomized trial of LAA occlusion devices. In this trial involving 707 patients with a mean CHA2DS2-VASc of 3.2, the WATCHMAN device was successfully implanted and was noninferior to warfarin in the prevention of thromboembolism, but at a significant cost of serious complications. 5 After failure to win FDA approval, and with the FDA mandating a second RCT, the PREVAIL (Prospective Randomized Evaluation of the WATCHMAN LAA Closure Device in Patients With Atrial Fibrillation Versus Long Term Warfarin Therapy) trial began. The PREVAIL trial enrolled 461 patients with a mean CHA2DS2-VASc of 3.8 and demonstrated that decreased rates of complications compared with PROTECT-AF were possible; however, there was no particular clinical advantage for the WATCHMAN compared with warfarin. 6 Enrollment in these two trials, by necessity, required the absence of a contraindication to warfarin, as long-term warfarin was the comparator arm. The CMS coverage decision specifies the following criteria:

  1. CHA2DS2-VASc of ≥ 3 or CHADS2 ≥ 2.
  2. Formal shared decision utilizing an independent, non-interventional physician whose opinion must be written in the medical record.
  3. Suitability for short-term warfarin, but deemed unable to take long-term anticoagulation.
  4. Procedure must be performed in a hospital with an established structural heart disease or electrophysiology program.
  5. Procedure must be performed by an interventional cardiologist, electrophysiologist or cardiovascular surgeon, who must have received formal training by the manufacturer, have performed ≥ 25 transseptal procedures, and continue to perform ≥ 25 transseptal procedures, of which at least 12 are LAA occlusions, over a two-year period.
  6. Patient is enrolled, and physicians and hospital participate in a prospective, national, audited registry for at least four years from the time of implantation.

These criteria importantly differ from the FDA criteria in that they include patients believed not to be candidates for long-term anticoagulation. However, they are much more clinically grounded in that the patients who receive these devices should only be those with an increased risk of bleeding on long-term anticoagulation. Those patients who can be managed with warfarin or one of the direct anticoagulants should not have a WATCHMAN placed, but rather should be anticoagulated. There are real risks to the procedure, and we still do not know the efficacy and safety over long-term follow-up. Life-years observed in WATCHMAN patients are only a fraction of those on anticoagulation. Thus, the authors of this Expert Analysis believe that the CMS indication is more sound than that of the FDA, and at this time the authors do not favor implantation of the WATCHMAN as an elective replacement for anticoagulation.

What next? A 2015 American College of Cardiology/Heart Rhythm Society/Society for Cardiovascular Angiography and Interventions societal overview appropriately recommended ongoing prospective registries of LAA occlusion, particularly with regard to patient selection and outcomes. 7 We are quite pleased with the recent CMS decision on WATCHMAN mandating the enrollment of patients into such a prospective registry. We would argue that such a registry should also be established and mandated for all LAA occlusion devices, including those currently in use and future devices or techniques. Coverage of the WATCHMAN will in all likelihood be expanded to third-party payers over time. It would be reasonable for these payers to adopt the CMS criteria, including the mandate to participate in the prospective registry.

Until we obtain more short-term data on decreasing procedural complications and long-term data on efficacy, it would be reasonable to be cautious in patient and physician selection for the WATCHMAN. 8 There are concerns about the learning curve of implantation: serious complications (pericardial effusion, stroke, device embolization) were observed in nearly 8% of the initial PROTECT-AF patients, dropping to 3.7% for the patients enrolled in the continued access protocol and to 2% in the PREVAIL trial. Although thromboembolism was low in both groups in the PREVAIL trial, non-inferiority was not achieved for the WATCHMAN; in other words, patients randomized to warfarin did better.

We are entering a new era in the options for prevention of thromboembolism in patients with AF. Anticoagulant options have expanded over the last several years and now include agents demonstrated to be safer and more effective than warfarin in large clinical trials. Whether LAA occlusion devices will follow a similar trajectory to these direct anticoagulants is not clear, and more data are necessary before a widespread expansion is warranted.

  1. Johnson WD, Ganjoo AK, Stone CD, Srivyas RC, Howard M. The left atrial appendage: our most lethal human attachment! Surgical implications. Eur J Cardiothorac Surg 2000;17:718-22.
  2. Mahajan R, Brooks AG, Sullivan T, et al. Importance of the underlying substrate in determining thrombus location in atrial fibrillation: implications for left atrial appendage closure. Heart 2012;98:1120-6.
  3. January CT, Wann LS, Alpert JS, et al. 2014 AHA/ACC/HRS guideline for the management of patients with atrial fibrillation: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and the Heart Rhythm Society. Circulation 2014;130:2071-104.
  4. Lip GY, Frison L, Halperin JL, Lane DA. Comparative validation of a novel risk score for predicting bleeding risk in anticoagulated patients with atrial fibrillation: the HAS-BLED (Hypertension, Abnormal Renal/Liver Function, Stroke, Bleeding History or Predisposition, Labile INR, Elderly, Drugs/Alcohol Concomitantly) score. J Am Coll Cardiol 2011;57:173-80.
  5. Reddy VY, Holmes D, Doshi SK, Neuzil P, Kar S. Safety of percutaneous left atrial appendage closure: results from the Watchman Left Atrial Appendage System for Embolic Protection in Patients with AF (PROTECT AF) clinical trial and the Continued Access Registry. Circulation 2011;123:417-24.
  6. Holmes DR Jr, Kar S, Price MJ, et al. Prospective randomized evaluation of the Watchman Left Atrial Appendage Closure device in patients with atrial fibrillation versus long-term warfarin therapy: the PREVAIL trial. J Am Coll Cardiol 2014;64:1-12.
  7. Masoudi FA, Calkins H, Kavinsky CJ, et al. 2015 ACC/HRS/SCAI left atrial appendage occlusion device societal overview. Heart Rhythm 2015;12:e122-36.
  8. Estes NA 3rd. Left atrial appendage closure for stroke prevention in AF: the quest for the Holy Grail. J Am Coll Cardiol 2015;66:2740-2.

Keywords: Aged, Alcohols, Angiography, Anticoagulants, Atrial Appendage, Atrial Fibrillation, Device Approval, Diabetes Mellitus, Electrophysiology, Heart Failure, Hypertension, Insurance, Health, Reimbursement, International Normalized Ratio, Liver, Medicaid, Medical Records, Medicare, Patient Selection, Pericardial Effusion, Pharmaceutical Preparations, Prospective Studies, Registries, Risk Factors, Stroke, Surgeons, Thromboembolism, Warfarin, United States Food and Drug Administration


Background

Cardiac event monitors are small portable devices worn by a patient during normal activity for up to 30 days. The device has a recording system capable of storing several minutes of the individual's electrocardiogram (EKG) record. The patient can initiate EKG recording during a symptomatic period of arrhythmia. Cardiac event monitors have primarily been used to diagnose and evaluate cardiac arrhythmias. These monitors are particularly useful in obtaining a record of arrhythmia that would not be discovered on a routine EKG or an arrhythmia that is so infrequent that it is not detected during a 24-hour period by a Holter monitor.

Two different types of cardiac event monitors are available. Pre-symptom (looping memory) event monitors are equipped with electrodes attached to the chest, and are able to capture EKG rhythms before the cardiac event monitor is triggered (pre-symptom recording) (Healthwise, 2003). This feature is especially useful for people who lose consciousness when arrhythmias occur.
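To make the looping-memory concept concrete, the sketch below models it as a fixed-size circular buffer that always holds the most recent samples, so that when the patient presses the trigger a pre-symptom window can be frozen alongside the post-symptom recording. This is a generic Python illustration, not any vendor's firmware; the class name and window sizes are invented for the example.

```python
from collections import deque

class LoopRecorder:
    """Toy pre-symptom looping memory (illustrative sketch only)."""

    def __init__(self, pre_samples: int, post_samples: int):
        self.pre = deque(maxlen=pre_samples)   # circular pre-trigger buffer
        self.post_needed = post_samples
        self.events = []                       # frozen (pre, post) strips
        self._frozen_pre = None
        self._pending = None

    def trigger(self) -> None:
        """Patient presses the button: freeze the pre-trigger samples."""
        self._frozen_pre = list(self.pre)
        self._pending = []

    def push(self, sample: float) -> None:
        """Feed one ECG sample; finish the post-trigger window if one is open."""
        if self._pending is not None:
            self._pending.append(sample)
            if len(self._pending) >= self.post_needed:
                self.events.append((self._frozen_pre, self._pending))
                self._pending = None
        self.pre.append(sample)

# Example: 5 pre-trigger and 3 post-trigger samples around a symptomatic event.
rec = LoopRecorder(pre_samples=5, post_samples=3)
for i in range(10):
    rec.push(float(i))
rec.trigger()
for i in range(10, 15):
    rec.push(float(i))
print(rec.events)  # [([5.0, 6.0, 7.0, 8.0, 9.0], [10.0, 11.0, 12.0])]
```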

Post-symptom event monitors do not have chest electrodes (Healthwise, 2003). One type of post-symptom event monitor is worn on the wrist. When symptoms occur, the patient presses a button to trigger an EKG recording. Another type of post-symptom event monitor is a device that the patient carries within easy reach. When symptoms occur, the patient presses the electrodes on the device against their chest and presses a button to trigger the EKG recording.

Cardiac event monitors have been developed with automatic trigger capabilities, which are designed to automatically trigger an EKG recording when certain arrhythmias occur. Automated-trigger cardiac event monitors are thought to be more sensitive, but less specific, than manually triggered cardiac event monitors for significant cardiac arrhythmias. The simplest automatic trigger cardiac event monitors detect a single type of arrhythmia (e.g., atrial fibrillation), whereas more sophisticated monitors are capable of detecting several types of arrhythmias (e.g., PDSHEART, 2001; Instromedix, 2002; LifeWatch, 2004; Medicomp, 2005; eCardio Diagnostics, 2004). Automatic trigger cardiac event monitors may be especially useful for persons with asymptomatic arrhythmias, persons with syncope, and other persons (children, persons with intellectual disabilities) who cannot reliably trigger the monitor when symptoms occur.
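One simple way an automatic trigger can flag atrial fibrillation is to look for sustained irregularity of successive RR intervals. The snippet below illustrates such a heuristic (coefficient of variation over a sliding window); the threshold and window length are arbitrary illustrative values, and this is not the proprietary algorithm of any monitor named above.

```python
import statistics

def af_trigger(rr_intervals_ms, window=10, cv_threshold=0.15):
    """Flag windows of highly irregular RR intervals (toy AF auto-trigger).

    Returns the indices at which the trailing `window` RR intervals have a
    coefficient of variation above `cv_threshold`.
    """
    hits = []
    for i in range(window, len(rr_intervals_ms) + 1):
        w = rr_intervals_ms[i - window:i]
        cv = statistics.pstdev(w) / statistics.mean(w)
        if cv > cv_threshold:
            hits.append(i - 1)
    return hits

# Example: regular sinus rhythm (~800 ms) followed by irregular intervals.
rr = [800, 805, 795, 810, 790, 800, 805, 795, 810, 790,
      620, 950, 540, 880, 610, 990, 570, 860, 640, 920]
print(af_trigger(rr))  # indices late in the series, where the rhythm is irregular
```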

Cardiac event monitors may come with 24-hour remote monitoring. Usually, EKG results are transmitted over standard phone lines at the end of each day to an attended monitoring center, where a technician screens EKG results and notifies the patient’s physician of any significant abnormal results, based on predetermined notification criteria. Newer cardiac event monitors allow EKG results to be transmitted via e-mail over the internet (CardioPhonics, 2006). Some cardiac event monitors allow the patient to transmit EKG over standard telephone lines to the attended monitoring center immediately after symptoms occur (e.g., Versweyveld, 2001; Transmedex, 2001); other cardiac event monitors have been adapted to also allow immediate transmission of EKG results by cellular telephone (Philips, 2003; Schiller, 2004; CRY, 2004; HealthFrontier, 2004). If test results suggest a life-threatening emergency, monitoring center personnel may instruct the patient to go to the hospital or call an ambulance (Daja et al, 2001). The development of mobile technology may extend the use of cardiac event monitors from primarily diagnostic purposes to use primarily as an alarm system, to allow rapid intervention for the elderly and others at increased risk of cardiac events (Cox, 2003; Lloyds, 1999).

Standard cardiac event monitors come with 5 to 10 minutes of memory. Cardiac event monitors with expanded memory capabilities have been developed, extending memory from approximately 20 to 30 minutes (Instromedix, 2002; LifeWatch, 2002; Philips Medical Systems, 2003; PDSHeart, 2006) to as much as several hours (CardioPhonics, 2001; CardioPhonics, 2006). Extended memory is especially useful for automatic trigger cardiac event monitors, because the automatic trigger may not reliably discriminate between clinically significant arrhythmias (true positives) and EKG artifacts (false positives), such that a more limited memory would be filled with false positives.

Mobile cardiovascular telemetry (MCT) refers to non-invasive ambulatory cardiac event monitors with extended memory capable of continuous measurement of heart rate and rhythm over several days, with transmission of results to a remote monitoring center. Mobile cardiovascular telemetry is similar to standard cardiac telemetry used in the hospital setting.

CardioNet (Philadelphia, PA) has developed an MCT device with extended memory and an automatic electrocardiogram (ECG) arrhythmia detector and alarm that is incorporated into a service that CardioNet has termed "Mobile Cardiac Outpatient Telemetry (MCOT)." The CardioNet device couples an automatic arrhythmia detector with cellular telephone transmission so that abnormal EKG waveforms can automatically be transmitted immediately to the remote monitoring center. The CardioNet device also has an extended memory characteristic of digital Holter monitors; the device is capable of storing up to 96 hours of EKG waveforms. These ECG results are transmitted over standard telephone lines to the remote monitoring center at the end of each day. The physician receives both urgent and daily reports.

The manufacturer states that an important advantage of MCOT is that it is capable of detecting asymptomatic events and transmitting them immediately, even when the patient is away from home, allowing timely intervention should a life-threatening arrhythmia occur. The CardioNet device’s extended memory allows the physician to examine any portion of the ECG waveform over an entire day. The extended memory also ensures that the device does not fill with EKG artifact (false positives) in cases where the CardioNet’s automated ECG trigger is unable to reliably discriminate between artifact and significant arrhythmias (true positives). Potential uses of MCOT include diagnosis of previously unrecognized arrhythmias, ascertainment of the cause of symptoms, and initiation of anti-arrhythmic drug therapy.

The CardioNet ambulatory ECG arrhythmia detector and alarm is cleared for marketing by the Food and Drug Administration (FDA) based on a 510(k) premarket notification due to the FDA’s determination that the CardioNet device was substantially equivalent to devices that were currently on the market. The CardioNet device is not intended for monitoring patients with life-threatening arrhythmias (FDA, 2002).

There is reliable evidence that MCT is superior to patient-activated external loop recorders for diagnosing cardiac arrhythmias. Rothman et al (2007) reported on a randomized controlled clinical study comparing the diagnostic yield of MCT (CardioNet MCOT) to patient-activated external looping event monitors for symptoms thought to be due to an arrhythmia. Subjects with symptoms of syncope, pre-syncope, or severe palpitations who had a non-diagnostic 24-hour Holter monitor were randomized to MCT or an external loop recorder for up to 30 days. The primary endpoint was the confirmation or exclusion of a probable arrhythmic cause of their symptoms. A total of 266 patients who completed the monitoring period were analyzed. A diagnosis was made in 88 % of MCT subjects compared to 75 % of subjects with standard loop recorders (p = 0.008). The authors noted that cardiac arrhythmias without associated symptoms, but nonetheless capable of causing the index symptoms, were the major determining factor accounting for the difference in diagnostic yield of MCT and patient-activated external loop recorders.
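The reported yield difference (88 % versus 75 %, p = 0.008) is the kind of comparison a two-proportion z-test captures. The sketch below reconstructs it under the simplifying assumption of two equal arms of roughly 133 patients each (the 266 analyzable patients split evenly), so the resulting p-value is only approximate and not taken from the paper.

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test (normal approximation)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Assumed arm sizes of ~133 each (266 analyzable patients split evenly).
z, p = two_proportion_z(0.88, 133, 0.75, 133)
print(round(z, 2), round(p, 4))  # roughly z ~ 2.7, p < 0.01
```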

There is also evidence to suggest that MCT is superior to auto-triggered external loop recorders for diagnosing symptoms thought to be due to a cardiac arrhythmia. Loop recorders with auto-trigger algorithms have been used to improve the diagnostic yield of event monitors (Strickberger et al, 2006). Rothman et al (2007) explained that their study of MCT was not designed to evaluate auto-triggered loop recorders, as this type of recorder was not available at all study sites. However, 2 of the 17 study sites used looping event recorders with an auto-trigger algorithm in all of their randomized patients (Rothman et al, 2007). A total of 49 subjects, or 16 % of the randomized population, were from these 2 sites. In a post-hoc analysis of this subgroup of patients, a diagnosis was made in 88 % of MCT subjects compared to 46 % of patients with auto-triggered external loop recorders. One possible factor accounting for the poor diagnostic yield of the auto-trigger loop recorders employed in this study is that they may have had limited memory that quickly filled with artifact. In addition, the CardioNet MCOT device used in this study uses dual EKG leads, whereas the auto-trigger loop recorders may have used single leads.

One limitation of the study by Rothman et al (2007) was the lack of blinding of the investigators or subjects. The investigators sought to overcome this bias by having all monitoring strips and diagnoses evaluated by another electrophysiologist who was blinded to assignment. Another limitation of this study is that it did not explore the potential for work-up bias; the study did not describe whether any of the study subjects had ever had previous work-ups for cardiac arrhythmias that included evaluation with an external loop recorder.

A number of retrospective uncontrolled studies have been published that have described the experience with MCT. Olson et al (2007) retrospectively examined the records of 122 consecutive patients evaluated using MCT for palpitations (n = 76), pre-syncope/syncope (n = 17), or to monitor the effectiveness of anti-arrhythmic therapy (n = 29). The investigators reported on the proportion of patients with syncope/pre-syncope and palpitations whose diagnosis was established by MCT, and the proportion of patients monitored for medication titration who had dosage adjustments. This study is of similar design to an earlier study by Joshi et al (2005), which reported on the first 100 consecutive patients monitored by MCT.

Vasamreddy et al (2006) reported on a small (n = 19) prospective exploratory study examining the feasibility and results of using MCT for monitoring patients with atrial fibrillation before and after catheter ablation for atrial fibrillation. The authors concluded that MCT has potential utility for this use. The authors noted, however, that poor patient compliance with the study's MCT monitoring protocol represented an important limitation; only 10 of the 19 subjects enrolled in the study completed the protocol, which required subjects to wear the MCT monitor 5 days per month for 6 months following the ablation.

Cardiac Telecom Corporation (Greensburg, PA) and Health Monitoring Services of America (Boca Raton, FL) have developed an MCT service called "Telemetry @ Home" that shares many similarities to the CardioNet Service. The Telemetry @ Home Service utilizes Cardiac Telecom’s Heartlink II Monitor, which has automatic arrhythmia detection and extended memory. The Heartlink II Monitor is able to wirelessly transmit abnormal EKG waveforms from a base station in the home to a remote monitoring center. Unlike the CardioNet Service, the Heartlink II Monitor does not have a built-in cellular telephone, so that the monitor does not automatically transmit abnormal waveforms when the patient is away from home out of range of the base station. The Heartlink II Monitor was cleared by the FDA based upon a 510(k) premarket notification.

Biowatch Medical (Columbia, SC) offers an MCT service called "Vital Signs Transmitter (VST)" that shares many similarities to other MCT services. According to the manufacturer, VST provides continuous, real-time, wireless ambulatory patient monitoring of 2 ECG channels plus respiration and temperature (Biowatch Medical, 2008; Gottipaty et al, 2008). The VST is a wireless belt-like device with non-adhesive electrodes that is worn around the patient's chest. The VST has an integrated microprocessor and wireless modem to automatically detect and transmit abnormal ECG waveforms. The monitor transmits ECG data via an integrated cellular telephone, when activated by the patient or by the monitor’s real-time analysis software, to a central monitoring station, where the tracing is analyzed by technicians. The technicians can then notify the patient’s physician of any serious arrhythmias, transmit ECG tracings, and provide patient intervention if required. The monitoring center also provides daily reports that can be accessed by the patient's physician over the Internet. According to the manufacturer, a new VST device is being developed that will also provide data on the patient's oxygen saturation, blood pressure, and weight (Biowatch Medical, 2008). The VST was cleared by the FDA based on a 510(k) premarket notification.

Lifewatch Inc. (Rosemount, IL) has developed an MCT service called LifeStar Ambulatory Cardiac Telemetry (ACT). The LifeStar ACT is similar to the CardioNet MCOT in that it has built-in cellular transmission so that results can be transmitted away from home. The LifeStar ACT cardiac monitoring system utilizes an auto-trigger algorithm to detect atrial fibrillation, tachycardia, bradycardia, and pauses, and requires no patient intervention to capture or transmit an arrhythmia when it occurs. The device can also be manually triggered by the patient during symptoms. Upon arrhythmia detection or manual activation, the LifeStar ACT transmits data via the integrated cellular telephone to LifeWatch, where the ECG is analyzed. The LifeStar ACT has a longer continuous memory loop that can be retrieved as needed by the monitoring center. The LifeWatch ACT was cleared by the FDA based on a 510(k) premarket notification.

A systematic evidence review of remote cardiac monitoring prepared for the Agency for Healthcare Research and Quality by the ECRI Evidence-based Practice Center (AHRQ, 2007) reached the following conclusions about the evidence for MCT: "This study [by Rothman et al, 2007] was a high-quality multicenter study with few limitations. Therefore, the evidence is sufficient to conclude that real-time continuous attended monitoring leads to change in disease management in significantly more patients than do certain ELRs [external loop recorders]. However, because this is a single multicenter study, the strength of evidence supporting this conclusion is weak. Also, the conclusion may not be applicable to ELRs with automatic event activation, as this model was underrepresented in the randomized controlled trial (RCT) [by Rothman et al, 2007] (only 16 % of patients used this model)."

The Zio Patch (iRhythm Technologies, Inc., San Francisco, CA) is a recording device that provides continuous single-lead ECG data for up to 14 days (Mittal et al, 2011). The Zio Patch uses a patch that is placed on the left pectoral region. The patch does not require patient activation. However, a button on the patch can be pressed by the patient to mark a symptomatic episode. At the end of the recording period, the patient mails the recorder back in a prepaid envelope to a central monitoring station (Mittal et al, 2011). A report is provided to the ordering physician within a few days. The manufacturer states that it is indicated for use in patients who may be asymptomatic or who may suffer from transient symptoms (e.g., anxiety, dizziness, fatigue, light-headedness, palpitations, pre-syncope, shortness of breath, and syncope). The Zio ECG Utilization Service (ZEUS) system is a comprehensive system that processes and analyzes received ECG data captured by long-duration, single-lead, continuous recording diagnostic devices (e.g., the Zio Patch and Zio Event Card). However, the clinical outcomes and cost-effectiveness of extended cardiac monitoring by means of the Zio Patch, the ZEUS system and similar devices have not been shown to be superior to other available approaches. Mittal et al (2011) noted that "clinical experience [with the Zio Patch] is currently lacking". The authors stated that it is not known how well patients can tolerate the patch for 1 to 2 weeks, and whether the patch can yield a high-quality artifact-free ECG recording through the entire recording period. The authors stated, furthermore, that "the clinical implications of not having access to ECG information within the recording period need to be determined".

Rosenberg et al (2013) compared the Zio Patch, a single-use, non-invasive waterproof long-term continuous monitoring patch, with a 24-hour Holter monitor in 74 consecutive patients with paroxysmal atrial fibrillation (AF) referred for Holter monitoring for detection of arrhythmias. The Zio Patch was well-tolerated, with a mean monitoring period of 10.8 +/- 2.8 days (range of 4 to 14 days). Over a 24-hour period, there was excellent agreement between the Zio Patch and Holter for identifying AF events and estimating AF burden. Although there was no difference in AF burden estimated by the Zio Patch and the Holter monitor, AF events were identified in 18 additional individuals, and the documented pattern of AF (persistent or paroxysmal) changed in 21 patients after Zio Patch monitoring. Other clinically relevant cardiac events recorded on the Zio Patch after the first 24 hours of monitoring, including symptomatic ventricular pauses, prompted referrals for pacemaker placement or changes in medications. As a result of the findings from the Zio Patch, 28.4 % of patients had a change in their clinical management. The authors concluded that the Zio Patch was well-tolerated, and allowed significantly longer continuous monitoring than a Holter, resulting in an improvement in clinical accuracy, the detection of potentially malignant arrhythmias, and a meaningful change in clinical management. Moreover, they stated that further studies are necessary to examine the long-term impact of the use of the Zio Patch in AF management.

Turakhia and colleagues (2013) noted that although extending the duration of ambulatory electrocardiographic monitoring beyond 24 to 48 hours can improve the detection of arrhythmias, lead-based (Holter) monitors might be limited by patient compliance and other factors. These researchers, therefore, evaluated compliance, analyzable signal time, interval to arrhythmia detection, and diagnostic yield of the Zio Patch, a novel leadless, electrocardiographic monitoring device in 26,751 consecutive patients. The mean wear time was 7.6 ± 3.6 days, and the median analyzable time was 99 % of the total wear time. Among the patients with detected arrhythmias (60.3 % of all patients), 29.9 % had their first arrhythmia and 51.1 % had their first symptom-triggered arrhythmia occur after the initial 48-hour period. Compared with the first 48 hours of monitoring, the overall diagnostic yield was greater when data from the entire Zio Patch wear duration were included for any arrhythmia (62.2 % versus 43.9 %, p < 0.0001) and for any symptomatic arrhythmia (9.7 % versus 4.4 %, p < 0.0001). For paroxysmal atrial fibrillation (AF), the mean interval to the first detection of AF was inversely proportional to the total AF burden, with an increasing proportion occurring after 48 hours (11.2 %, 10.5 %, 20.8 %, and 38.0 % for an AF burden of 51 % to 75 %, 26 % to 50 %, 1 % to 25 %, and less than 1 %, respectively). The authors concluded that extended monitoring with the Zio Patch for less than or equal to 14 days is feasible, with high patient compliance, a high analyzable signal time, and an incremental diagnostic yield beyond 48 hours for all arrhythmia types. These findings could have significant implications for device selection, monitoring duration, and care pathways for arrhythmia evaluation and AF surveillance.

Higgins (2013) stated that a number of substantial improvements to the 60-year-old concept of the Holter monitor have recently been developed. One promising advance is the Zio Patch (iRhythm Technologies, Inc., CA), a small 2 × 5-inch patch, which can continuously record up to 14 days of a single ECG channel of cardiac rhythm without the need for removal during exercise, sleeping or bathing. Its ease of use, which enables optimal long-term monitoring, has been established in the ambulatory setting, although some insurance carriers have been reluctant to reimburse appropriately for this advance, an issue also characteristic of other heart monitors, which have been treated as 'loss-leaders'. In this article, in addition to discussing possible reasons for this reluctance, a novel model for direct-to-consumer marketing of heart monitoring, outside of the traditional health insurance reimbursement model, is also presented. Additional current and future advances in heart rhythm recording are also discussed. Such potentially revolutionary opportunities have only recently become possible as a result of technologic advances.

The Centers for Medicare & Medicaid Services (CMS) (2004) has determined that an ambulatory cardiac monitoring device or service is eligible for Medicare coverage only if it can be placed into one of the following categories:

  • Patient/event activated intermittent recorders: pre-symptom memory loop (insertable or non-insertable), or post-symptom (no memory loop)
  • Non-activated continuous recorders

The CMS has determined that an ambulatory cardiac monitoring device or service is not covered if it does not fit into these categories. The CMS noted that it may create new ambulatory electrocardiographic monitoring device categories "if published, peer-reviewed clinical studies demonstrate evidence of improved clinical utility, or equal utility with additional advantage to the patient, as indicated by improved patient management and/or improved health outcomes in the Medicare population (such as superior ability to detect serious or life-threatening arrhythmias) as compared to devices or services in the currently described categories".

Hanke et al (2009) noted that 24-hr Holter monitoring (24HM) is commonly used to assess cardiac rhythm after surgical therapy of atrial fibrillation (AF). However, this "snapshot" documentation leaves a considerable diagnostic window and only stores short-time cardiac rhythm episodes. To improve the accuracy of rhythm surveillance after surgical ablation therapy and to compare continuous heart rhythm surveillance versus 24HM follow-up intra-individually, these investigators evaluated a novel implantable continuous cardiac rhythm monitoring (IMD) device (Reveal XT 9525, Medtronic Inc., Minneapolis, MN). A total of 45 cardiac surgical patients (37 male; mean age of 69.7 ± 9.2 years) with a mean pre-operative AF duration of 38 ± 45 months were treated with either left atrial epicardial high-intensity focused ultrasound ablation (n = 33) or endocardial cryothermy (n = 12) in case of concomitant mitral valve surgery. Rhythm control readings were derived simultaneously from 24HM and IMD at 3-month intervals, with a total recording of 2,021 hours for 24HM and 220,766 hours for IMD. Mean follow-up was 8.30 ± 3.97 months (range of 0 to 12 months). Mean post-operative AF burden (time period spent in AF) as indicated by IMD was 37 ± 43 %. Sinus rhythm was documented in 53 readings of 24HM, but in only 34 of these instances by the IMD in the time period before the 24HM readings (64 %, p < 0.0001), reflecting a 24HM sensitivity of 0.60 and a negative-predictive value (NPV) of 0.64 for detecting AF recurrence. The authors concluded that for "real-life" cardiac rhythm documentation, continuous heart rhythm surveillance rather than any conventional 24HM follow-up strategy is necessary. This is particularly important for further judgment of ablation techniques and devices, as well as anti-coagulation and anti-arrhythmic therapy.

Hindricks et al (2010) quantified the performance of the first implantable leadless cardiac monitor (ICM) with dedicated AF detection capabilities. Patients (n = 247) with an implanted ICM who were likely to present with paroxysmal AF were selected. A special Holter device stored 46 hours of subcutaneously recorded ECG, ICM markers, and 2 surface ECG leads. The ICM automatic arrhythmia classification was compared with the core laboratory classification of the surface ECG. Of the 206 analyzable Holter recordings collected, 76 (37 %) contained at least 1 episode of core laboratory classified AF. The sensitivity, specificity, positive-predictive value, and NPV for identifying patients with any AF were 96.1 %, 85.4 %, 79.3 %, and 97.4 %, respectively. The AF burden measured with the ICM was very well-correlated with the reference value derived from the Holter (Pearson coefficient = 0.97). The overall accuracy of the ICM for detecting AF was 98.5 %. The authors concluded that in this ICM validation study, the dedicated AF detection algorithm reliably detected the presence or absence of AF and the AF burden was accurately quantified.
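The sensitivity, specificity, PPV, and NPV quoted in these validation studies all follow from a 2 × 2 confusion matrix. The helper below shows the arithmetic; the counts are back-calculated so that the proportions roughly match those reported by Hindricks et al (76 of 206 patients with AF), and should be read as an illustration rather than the study's actual patient-level data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from 2x2 confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts back-calculated for illustration only (76 AF patients, 130 without AF).
print(diagnostic_metrics(tp=73, fp=19, fn=3, tn=111))
# -> sensitivity ~0.96, specificity ~0.85, PPV ~0.79, NPV ~0.97
```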

Ip et al (2012) examined the outcomes of surgical ablation and post-ablation AF surveillance with a leadless ICM. A total of 45 patients with drug-refractory paroxysmal or persistent AF underwent video-assisted epicardial ablation using a bipolar radiofrequency clamp. An ICM was implanted subcutaneously post-ablation to assess AF recurrence. AF recurrence was defined as greater than or equal to 1 AF episode with a duration of greater than or equal to 30 s. The device-stored data were downloaded weekly over the internet, and all transmitted events were reviewed. A total of 1,220 automatic and patient-activated AF episodes were analyzed over a follow-up of 12 ± 3 months. Of these episodes, 46 % were asymptomatic. Furthermore, only 66 % of the patient-activated episodes were AF. Recurrence of AF was highest in the first 4 weeks and decreased substantially 6 months post-ablation. The overall freedom from AF recurrence at the end of follow-up was 60 %. When 48-hr Holter recordings were compared with the device-stored episodes, the sensitivity of the device to detect AF was 98 %, and the specificity was 71 %. The authors concluded that ICM provides an objective measure of AF ablation success and may be useful in making clinical decisions.

The AliveCor Heart Monitor (AliveCor, Inc., San Francisco, CA) is an iPhone-enabled heart monitor that has been known as the "iPhoneECG". It is in a thin case with 2 electrodes that snaps onto the back of an iPhone 4 or 5. To obtain an electrocardiogram (ECG) recording, the patient just holds the device while pressing fingers from each hand onto the electrodes. The device can also obtain an ECG from the patient's chest. The AliveCor ECG iPhone application can record rhythm strips of any duration to be stored on the phone and uploaded securely for later analysis, sharing, or printing through AliveCor's website. The AliveCor Heart Monitor will operate for about 100 hours on a 3.0 V coin cell battery.

However, there is currently a lack of evidence to support the clinical value of the AliveCor Heart Monitor. Prospective, randomized controlled studies are needed to ascertain how the use of the AliveCor Heart Monitor would improve clinical outcomes in patients with cardiovascular diseases/disorders.

According to the company, research studies are currently in progress to explore effectiveness of the AliveCor Heart Monitor in the following areas:

  • Expanding physician assistant/registered nurse data collection abilities
  • Long-term atrial fibrillation remote monitoring
  • Medication-induced QT-duration response monitoring
  • Multi-specialty care integration
  • Post-ablation follow-up
  • Preventive pediatric care
  • Stress induced rhythm morphology changes

The implantable loop recorder (ILR) is a subcutaneous, single-lead, ECG monitoring device used for diagnosis in patients with recurrent unexplained episodes of palpitations or syncope. The 2009 ESC syncope guidelines include the following recommendations for use of ILRs:

  • ILR is indicated for early phase evaluation in patients with recurrent syncope of uncertain origin, absence of high-risk criteria (see appendix), and a high likelihood of recurrence within the battery life of the device.
  • An ILR is recommended in patients who have high-risk features (see appendix) in whom a comprehensive evaluation did not demonstrate a cause of syncope or lead to a specific treatment.
  • An ILR should be considered to assess the contribution of bradycardia before embarking on cardiac pacing in patients with suspected or certain reflex syncope with frequent or traumatic syncopal episodes.

Ziegler et al (2012) stated that the detection of undiagnosed atrial tachycardia/atrial fibrillation (AT/AF) among patients with stroke risk factors could be useful for primary stroke prevention. These researchers analyzed newly detected AT/AF (NDAF) using continuous monitoring in patients with stroke risk factors but without previous stroke or evidence of AT/AF. Newly detected AT/AF (AT/AF greater than 5 minutes on any day) was determined in patients with implantable cardiac rhythm devices and greater than or equal to 1 stroke risk factor (congestive heart failure, hypertension, age greater than or equal to 75 years, or diabetes). All devices were capable of continuously monitoring the daily cumulative time in AT/AF. Of 1,368 eligible patients, NDAF was identified in 416 (30 %) during a follow-up of 1.1 ± 0.7 years and was unrelated to the CHADS2 score (a clinical prediction rule for estimating the risk of stroke in patients with non-rheumatic AF, comprising congestive heart failure, hypertension [blood pressure consistently greater than 140/90 mm Hg or hypertension treated with medication], age greater than or equal to 75 years, diabetes mellitus, and previous stroke or transient ischemic attack). The presence of AT/AF greater than 6 hours on greater than or equal to 1 day increased significantly with increased CHADS2 scores and was present in 158 (54 %) of 294 patients with NDAF and a CHADS2 score of greater than or equal to 2. Newly detected AT/AF was sporadic, and 78 % of patients with a CHADS2 score of greater than or equal to 2 with NDAF experienced AT/AF on less than 10 % of the follow-up days. The median interval to NDAF detection in these higher-risk patients was 72 days (interquartile range: 13 to 177). The authors concluded that continuous monitoring identified NDAF in 30 % of patients with stroke risk factors. In patients with NDAF, AT/AF occurred sporadically, highlighting the difficulty in detecting paroxysmal AT/AF using traditional monitoring methods. However, AT/AF also persisted for greater than 6 hours on greater than or equal to 1 day in most patients with NDAF and multiple stroke risk factors. Whether patients with CHADS2 risk factors but without a history of AF might benefit from implantable monitors for the selection and administration of anti-coagulation for primary stroke prevention merits additional investigation.
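The study's definitions are easy to operationalize: NDAF required more than 5 minutes of AT/AF on any day, and more than 6 hours on at least 1 day served as the higher-burden marker. The sketch below applies these thresholds to a hypothetical per-day burden log; the data structure and values are invented for illustration.

```python
def classify_ndaf(daily_af_minutes):
    """Classify device-detected AT/AF from daily burden logs (illustrative).

    `daily_af_minutes` maps a day label to cumulative AT/AF minutes that day.
    Returns whether the >5 min/day (NDAF) and >6 h/day thresholds were ever met.
    """
    ndaf = any(m > 5 for m in daily_af_minutes.values())
    high_burden = any(m > 360 for m in daily_af_minutes.values())
    return {"ndaf": ndaf, "af_over_6h_any_day": high_burden}

# Hypothetical week of device logs (minutes of AT/AF per day).
log = {"d1": 0, "d2": 12, "d3": 0, "d4": 425, "d5": 0, "d6": 3, "d7": 0}
print(classify_ndaf(log))  # {'ndaf': True, 'af_over_6h_any_day': True}
```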

Cotter et al (2013) examined the usefulness of an ILR with improved AF detection capability (Reveal XT) and the factors associated with AF in the setting of unexplained stroke. They reported a cohort study of 51 patients in whom ILRs were implanted for the investigation of ischemic stroke for which no cause had been found (cryptogenic) following appropriate vascular and cardiac imaging and at least 24 hours of cardiac rhythm monitoring. Patient age ranged from 17 to 73 (median of 52) years. Of the 30 patients with a shunt investigation, 22 had a patent foramen ovale (73.3 %; 95 % CI: 56.5 % to 90.1 %). Atrial fibrillation was identified in 13 (25.5 %; 95 % confidence interval [CI]: 13.1 % to 37.9 %) cases. Atrial fibrillation was associated with increasing age (p = 0.018), inter-atrial conduction block (p = 0.02), left atrial volume (p = 0.025), and the occurrence of atrial premature contractions on preceding external monitoring (p = 0.004). The median (range) duration of monitoring prior to AF detection was 48 (0 to 154) days. The authors concluded that in patients with unexplained stroke, AF was detected by ILR in 25.5 %. Predictors of AF were identified, which may help to target investigations. They stated that ILRs may have a central role in the future in the investigation of patients with unexplained stroke.

  1. in 11 (55 %) patients, stored ECGs confirmed AF at 62 ± 38 days after ablation
  2. in 4 (20 %) patients, although the ILR suggested AF, episodes actually represented sinus rhythm with frequent premature atrial contractions and/or over-sensing
  3. in 5 (25 %) patients, no AF was observed. Episodes less than 4 hours were associated with low AF burden (less than 1 %) or false detections.

The 1-year freedom from any episode of AF greater than 4 and greater than 12 hours was 52 % and 83 %, respectively. The authors concluded that these findings showed that many (but not all) patients develop new AF within the first 4 months of flutter ablation. Since external ECG monitoring for this duration is impractical, the ILR has an important role for long-term AF surveillance. They stated that future research should be directed toward identifying the relationship between duration/burden of AF and stroke and improving existing ILR technology.

An UpToDate review on "Cryptogenic stroke" (Prabhakaran and Elkind, 2013) states that "Paroxysmal atrial fibrillation (AF), if transient, infrequent, and largely asymptomatic, may be undetected on standard cardiac monitoring such as continuous telemetry and 24 or 48-hour Holter monitors. In a study that assessed longer-term monitoring using an outpatient telemetry system for a median duration of 21 days among 56 patients with cryptogenic stroke, paroxysmal AF was detected in 13 patients (23 %). The median time to detection of AF was 7 days. The majority of patients with paroxysmal AF were asymptomatic during the fleeting episodes. Other reports have noted that the detection rate of paroxysmal AF can be increased with longer duration of cardiac monitoring, and that precursors of AF such as frequent premature atrial contractions may predict those harboring paroxysmal AF. The optimal monitoring method – continuous telemetry, ambulatory electrocardiography, serial electrocardiography, transtelephonic ECG monitoring, or implantable loop recorders – is uncertain, though longer durations of monitoring are likely to obtain the highest diagnostic yield".

Sanna et al (2014) conducted a randomized, controlled study of 441 patients (CRYSTAL AF trial) to assess whether long-term monitoring with an insertable cardiac monitor (ICM) is more effective than conventional follow-up (control) for detecting atrial fibrillation in patients with cryptogenic stroke. Patients 40 years of age or older with no evidence of atrial fibrillation during at least 24 hours of ECG monitoring underwent randomization within 90 days after the index event. The primary end-point was the time to first detection of atrial fibrillation (lasting greater than 30 seconds) within 6 months. Among the secondary end-points was the time to first detection of atrial fibrillation within 12 months. Data were analyzed according to the intention-to-treat principle. By 6 months, atrial fibrillation had been detected in 8.9 % of patients in the ICM group (19 patients) versus 1.4 % of patients in the control group (3 patients) (hazard ratio [HR], 6.4; 95 % confidence interval [CI]: 1.9 to 21.7; p < 0.001). By 12 months, atrial fibrillation had been detected in 12.4 % of patients in the ICM group (29 patients) versus 2.0 % of patients in the control group (4 patients) (HR, 7.3; 95 % CI: 2.6 to 20.8; p < 0.001). The authors concluded that ECG monitoring with an ICM was superior to conventional follow-up for detecting atrial fibrillation after cryptogenic stroke.

In the EMBRACE trial, Gladstone et al (2014) randomly assigned 572 patients 55 years of age or older, without known atrial fibrillation, who had had a cryptogenic ischemic stroke or TIA within the previous 6 months (cause undetermined after standard tests, including 24-hour electrocardiography [ECG]), to undergo additional noninvasive ambulatory ECG monitoring with either a 30-day event-triggered recorder (intervention group) or a conventional 24-hour monitor (control group). The primary outcome was newly detected atrial fibrillation lasting 30 seconds or longer within 90 days after randomization. Secondary outcomes included episodes of atrial fibrillation lasting 2.5 minutes or longer and anticoagulation status at 90 days. Atrial fibrillation lasting 30 seconds or longer was detected in 45 of 280 patients (16.1 %) in the intervention group, as compared with 9 of 277 (3.2 %) in the control group (absolute difference, 12.9 percentage points; 95 % CI: 8.0 to 17.6; p < 0.001; number needed to screen, 8). Atrial fibrillation lasting 2.5 minutes or longer was present in 28 of 284 patients (9.9 %) in the intervention group, as compared with 7 of 277 (2.5 %) in the control group (absolute difference, 7.4 percentage points; 95 % CI: 3.4 to 11.3; p < 0.001). By 90 days, oral anti-coagulant therapy had been prescribed for more patients in the intervention group than in the control group (52 of 280 patients [18.6 %] versus 31 of 279 [11.1 %]; absolute difference, 7.5 percentage points; 95 % CI: 1.6 to 13.3; p = 0.01). The investigators concluded that, among patients with a recent cryptogenic stroke or TIA who were 55 years of age or older, paroxysmal atrial fibrillation was common. Non-invasive ambulatory ECG monitoring for a target of 30 days significantly improved the detection of atrial fibrillation by a factor of more than 5 and nearly doubled the rate of anti-coagulant treatment, as compared with the standard practice of short-duration ECG monitoring.

An accompanying editorial stated that at least 2 relevant questions remain unanswered (Kamel, 2014). "First, subclinical atrial fibrillation is clearly not the whole answer to the riddle of cryptogenic stroke. Even after long-term follow-up involving 3 years of continuous rhythm monitoring in the CRYSTAL AF trial, less than one third of the patients had evidence of atrial fibrillation. We need to identify additional sources of embolism and better markers of known stroke mechanisms such as nonobstructive atherosclerosis. Second, we need more evidence to guide therapy for subclinical atrial fibrillation. Randomized trials of antithrombotic therapy have involved patients with a sufficient burden of atrial fibrillation to allow its recognition without prolonged rhythm monitoring. Whether the proven benefit of anticoagulation in this population extends to patients with subclinical atrial fibrillation must be answered in future trials."

The editorialist (Kamel, 2014) continued: "In the meantime, how should the results of the CRYSTAL AF and EMBRACE trials change practice? The weight of current evidence suggests that subclinical atrial fibrillation is a modifiable risk factor for stroke recurrence, and its presence should be thoroughly ruled out in this high-risk population. Therefore, most patients with cryptogenic stroke or transient ischemic attack should undergo at least several weeks of rhythm monitoring. Relatively inexpensive external loop recorders, such as those used in the EMBRACE trial, will probably be cost-effective; the value of more expensive implantable loop recorders is less clear. Furthermore, the detection of subclinical atrial fibrillation in these patients should generally prompt a switch from antiplatelet to anticoagulant therapy. At the least, patients should be followed closely in order to detect progression to clinically apparent atrial fibrillation, in which case the evidence unambiguously supports anticoagulant therapy for the secondary prevention of stroke."

The BIOTRONIK BioMonitor (BIOTRONIK Home Monitoring) is an implantable cardiac monitor. It differs from other implantables as it does not have leads going to the heart. The BioMonitor is suggested to continuously record ECG data when an arrhythmia occurs. An external magnet can also be positioned over the implanted device to record ECG data when symptoms are experienced.

The mobile patient management system is a monitoring device designed for the detection of cardiac arrhythmias. These devices differ from other ECG devices in that they may also monitor activity, body fluid status, body temperature, posture and respiratory rate. An example of such a device is the BodyGuardian Remote Monitoring System.

The ViSi Mobile Monitoring System is intended for single or multi-parameter vital sign monitoring of adults. It measures ECG (three or five leads), heart rate, respiration rate, noninvasive blood pressure, noninvasive monitoring of oxygen saturation (SpO2), pulse rate and skin temperature.

Self-monitoring ECG technologies, which may be obtained without a physician prescription, include, but are not limited to, software applications for smartphones and other electronic devices suggested to monitor ECG, heart rate, oxygen saturation, respiratory rate, etc. In addition, there are devices (wireless or non-wireless), such as the Alive Heart and Activity Monitor (Alive Technologies), a wireless health monitoring system, purported to monitor ECG, heart rate and other non-cardiac related indications. These devices may be attached to a finger, ear lobe or other body part.

Biotronik BioMonitor

Ciconte and colleagues (2017) noted that continuous rhythm monitoring is valuable for adequate AF management in the clinical setting. Subcutaneous leadless ICMs yield improved AF detection, overcoming the intrinsic limitations of the currently available external recording systems and thus resulting in more accurate patient treatment. These investigators evaluated the detection performance of a novel 3-vector ICM device equipped with a dedicated AF algorithm. A total of 66 patients (86.4 % males; mean age of 60.4 ± 9.4 years) at risk of presenting AF episodes, who had undergone implantation of the novel ICM (BioMonitor, Biotronik SE & Co. KG, Berlin, Germany), were enrolled. External 48-hour ECG Holter monitoring was performed 4 weeks after device implantation. The automatic ICM AF classification was compared with the manual Holter arrhythmia recordings. Of the overall study population, 63/66 (95.5 %) had analyzable Holter data, and 39/63 (62 %) showed at least 1 true AF episode. All these patients had at least 1 AF episode stored in the ICM. On Holter monitoring, 24/63 (38 %) patients did not show AF episodes; in 16 of them (16/24, 67 %), the ICM confirmed the absence of AF. The AF detection sensitivity and positive predictive value (PPV) for episode analysis were 95.4 % and 76.3 %, respectively. The authors concluded that continuous monitoring using this novel device, equipped with a dedicated detection algorithm, yielded accurate and reliable detection of AF episodes. They stated that the ICM is a promising tool for tailoring individual AF patient management; further long-term prospective studies are needed to confirm these encouraging results.

The AliveCor Heart Monitor (iPhoneECG)

Chung and Guise (2015) evaluated the feasibility of AliveCor tracings for QTc assessment in patients receiving dofetilide. A total of 5 patients with persistent AF underwent the two-handed measurement (which mimics lead I). On the ECG, lead I or II was used. There was no significant difference between the AliveCor QTc and the ECG QTc (all within ± 20 msec). The authors concluded that the AliveCor device can be used to monitor the QTc in these patients. This was, however, a small (n = 5) feasibility study; the clinical role of the AliveCor heart monitor has yet to be established.
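QTc in this kind of assessment is conventionally obtained with a rate correction such as Bazett's formula, QTc = QT / sqrt(RR), with RR in seconds; the summary above does not state which correction Chung and Guise used, so the snippet below simply shows the conventional calculation.

```python
from math import sqrt

def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
    """Bazett-corrected QT interval in ms (QTc = QT / sqrt(RR in seconds))."""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / sqrt(rr_s)

# Example: QT 400 ms at 75 bpm -> RR = 0.8 s -> QTc ~ 447 ms.
print(round(qtc_bazett(400, 75)))  # 447
```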

Baquero et al (2015) stated that the AliveCor ECG is an FDA-approved ambulatory cardiac rhythm monitor that records a single-channel (lead I) ECG rhythm strip using an iPhone. In the past few years, the use of smartphones and tablets with health-related applications has proliferated significantly. In this initial feasibility trial, these researchers attempted to reproduce the 12-lead ECG using the bipolar arrangement of the AliveCor monitor coupled to smartphone technology. They used the AliveCor heart monitor coupled with an iPhone and the AliveECG application (app) in 5 individuals. In these 5 individuals, recordings from both a standard 12-lead ECG and the AliveCor-generated 12-lead ECG had the same interpretation. The authors concluded that the findings of this study demonstrated the feasibility of creating a 12-lead ECG with a smartphone. They stated that the validity of the recordings would seem to suggest that this technology could become a useful tool for clinical use; this new hand-held smartphone 12-lead ECG recorder needs further development and validation.

In a pilot study, Muhlestein et al (2015) attempted to gain experience with smartphone ECG prior to designing a larger multi-center study evaluating standard 12-lead ECG compared to smartphone ECG. A total of 6 patients for whom the hospital STEMI protocol was activated were evaluated with traditional 12-lead ECG, followed immediately by a smartphone ECG using right (VnR) and left (VnL) limb leads for precordial grounding. The AliveCor Heart Monitor was utilized for this study. All tracings were taken prior to catheterization or immediately after re-vascularization while still in the catheterization laboratory. The smartphone ECG had excellent correlation with the gold-standard 12-lead ECG in all patients; 4 out of 6 tracings were judged to meet STEMI criteria on both modalities as determined by 3 experienced cardiologists, and in the remaining 2, consensus indicated a non-STEMI ECG diagnosis. No significant difference was noted between VnR and VnL. The authors concluded that smartphone-based ECG is a promising, developing technology intended to increase the availability and speed of electrocardiographic evaluation. This study confirmed the potential of a smartphone ECG for evaluation of acute ischemia and the feasibility of studying this technology further to define the diagnostic accuracy, limitations and appropriate use of this new technology.

Peritz et al (2015) noted that rapidly detecting dangerous arrhythmias in a symptomatic athlete continues to be an elusive goal. The use of hand-held smartphone ECG monitors could represent a helpful tool connecting the athletic trainer to the cardiologist. A total of 6 college athletes who presented to their athletic trainers complaining of palpitations during exercise were included in this analysis. A single-lead ECG was performed using the AliveCor Heart Monitor and sent wirelessly to the team cardiologist, who confirmed the absence of a dangerous arrhythmia. The authors concluded that AliveCor monitoring has the potential to enhance the evaluation of symptomatic athletes by allowing trainers and team physicians to make a diagnosis in real-time and facilitate faster return to play.

Chan and associates (2016) stated that diagnosing AF before ischemic stroke occurs is a priority for stroke prevention in AF. Smartphone camera-based photo-plethysmographic (PPG) pulse waveform measurement discriminates between different heart rhythms, but its ability to diagnose AF in real-world situations has not been adequately investigated. These researchers evaluated the diagnostic performance of a stand-alone smartphone PPG application, Cardiio Rhythm, for AF screening in the primary care setting. Patients with hypertension, with diabetes mellitus, and/or aged greater than or equal to 65 years were recruited. A single-lead ECG was recorded using the AliveCor heart monitor, with tracings reviewed subsequently by 2 cardiologists to provide the reference standard. PPG measurements were performed using the Cardiio Rhythm smartphone application. AF was diagnosed in 28 (2.76 %) of 1,013 participants. The diagnostic sensitivity of the Cardiio Rhythm for AF detection was 92.9 % (95 % CI: 77 to 99 %) and was higher than that of the AliveCor automated algorithm (71.4 % [95 % CI: 51 to 87 %]). The specificities of Cardiio Rhythm and the AliveCor automated algorithm were comparable (97.7 % [95 % CI: 97 to 99 %] versus 99.4 % [95 % CI: 99 to 100 %]). The PPV of the Cardiio Rhythm was lower than that of the AliveCor automated algorithm (53.1 % [95 % CI: 38 to 67 %] versus 76.9 % [95 % CI: 56 to 91 %]); both had a very high negative predictive value (NPV) (99.8 % [95 % CI: 99 to 100 %] versus 99.2 % [95 % CI: 98 to 100 %]). The authors concluded that the Cardiio Rhythm smartphone PPG application provided an accurate and reliable means to detect AF in patients at risk of developing AF and has the potential to enable population-based screening for AF.
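The sensitivity, specificity, PPV and NPV figures above follow directly from a 2 × 2 comparison of each device's output against the cardiologist-adjudicated reference. A minimal Python sketch of that arithmetic is shown below; the counts are back-calculated approximately from the reported Cardiio Rhythm figures (28 AF cases among 1,013 participants), and the Wilson score interval is used as one common choice of confidence interval, not necessarily the method used by the authors.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95 % by default)."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (centre - half, centre + half)

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV with 95 % Wilson CIs."""
    counts = {
        "sensitivity": (tp, tp + fn),
        "specificity": (tn, tn + fp),
        "ppv": (tp, tp + fp),
        "npv": (tn, tn + fn),
    }
    return {k: (s / n, wilson_ci(s, n)) for k, (s, n) in counts.items()}

# Counts back-calculated (approximately) from the reported Cardiio Rhythm
# figures: 28 AF cases among 1,013 participants, sensitivity 92.9 %,
# specificity 97.7 %.  Illustrative only, not the authors' raw data.
print(diagnostic_metrics(tp=26, fp=23, fn=2, tn=962))
```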

Desteghe and colleagues (2017) determined the usability, accuracy, and cost-effectiveness of 2 hand-held single-lead ECG devices for AF screening in a hospital population with an increased risk of AF. Hospitalized patients (n = 445) on cardiology or geriatric wards were screened for AF with 2 hand-held ECG devices (MyDiagnostick and AliveCor). The performance of the automated algorithm of each device was evaluated against a full 12-lead or 6-lead ECG recording. All ECGs and monitor tracings were also independently reviewed in a blinded fashion by 2 electrophysiologists. Time investments by nurses and physicians were tracked and used to estimate the cost-effectiveness of different screening strategies. Hand-held recordings were not possible in 7 % and 21.4 % of cardiology and geriatric patients, respectively, because they were not able to hold the devices properly. Even after the exclusion of patients with an implanted device, the sensitivity and specificity of the automated algorithms were sub-optimal (cardiology: 81.8 % and 94.2 %, respectively, for MyDiagnostick; 54.5 % and 97.5 %, respectively, for AliveCor; geriatrics: 89.5 % and 95.7 %, respectively, for MyDiagnostick; 78.9 % and 97.9 %, respectively, for AliveCor). A scenario based on automated AliveCor evaluation in patients without a history of AF and without an implanted device proved to be the most cost-effective method, with a provider cost to identify 1 new AF patient of €193 and €82 at cardiology and geriatrics, respectively. The cost to detect 1 preventable stroke per year would be €7,535 and €1,916, respectively (based on average CHA2DS2-VASc scores of 3.9 ± 2.0 and 5.0 ± 1.5, respectively). Manual interpretation increased sensitivity but decreased specificity, doubling the cost per detected patient, although it remained cheaper than 12-lead ECG screening alone. The authors concluded that using AliveCor or MyDiagnostick hand-held recorders requires a structured screening strategy to be effective and cost-effective in a hospital setting. It must exclude patients with implanted devices and known AF, and requires targeted additional 12-lead ECGs to optimize specificity. They noted that under these circumstances, the expenses per diagnosed new AF patient and preventable stroke are reasonable.
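The relationship between the cost per newly detected AF patient and the cost per preventable stroke follows from a simple chain of assumptions about stroke risk and the benefit of anticoagulation. The sketch below illustrates that arithmetic; the annual stroke risk and relative risk reduction are assumptions chosen for illustration, not the parameters used by Desteghe and colleagues.

```python
# Illustrative sketch of the arithmetic behind a "cost per preventable
# stroke" figure.  The stroke-risk and risk-reduction values below are
# assumptions for illustration, not the authors' health-economic model.

def cost_per_preventable_stroke(cost_per_detected_af: float,
                                annual_stroke_risk: float,
                                rrr_anticoagulation: float) -> float:
    """Cost to prevent one stroke per year, given the cost of finding one
    new AF patient, that patient's assumed annual stroke risk, and the
    relative risk reduction assumed for anticoagulation."""
    strokes_prevented_per_patient = annual_stroke_risk * rrr_anticoagulation
    return cost_per_detected_af / strokes_prevented_per_patient

# Hypothetical inputs: €193 per detected AF patient (cardiology ward),
# ~4 % annual stroke risk, ~64 % relative risk reduction on anticoagulation.
print(round(cost_per_preventable_stroke(193, 0.04, 0.64)))  # ≈ €7,539
```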

Chan and colleagues (2017) noted that 2 new devices have been introduced that use automated algorithms to diagnose AF. One FDA-approved device uses algorithms on a smartphone and dry electrodes that plug into the phone to detect AF (AliveCor Heart Monitor), while the other is in use in Europe and uses an algorithm integrated with an automatic blood pressure device (Microlife WatchBP Office AFIB). In this study from Hong Kong, a total of 2,052 patients with a mean age of 68 years were evaluated by both devices. If either diagnosed AF, a 12-lead ECG was performed. Two cardiologists examined the single-lead ECG generated by the AliveCor device to determine the reference-standard cardiac rhythm. They then calculated the sensitivity and specificity of the devices. Poor sensitivity could lead to missed diagnoses, whereas poor specificity would lead to unnecessary ECGs to confirm the diagnosis. The AliveCor device detected 16 of 24 patients with a final diagnosis of AF (67 % sensitivity; 95 % CI: 45 % to 84 %) compared with 20 of 24 for the Microlife device (83 % sensitivity; 95 % CI: 63 % to 95 %). The AliveCor device had 11 false-positive results (99.5 % specificity) compared with 27 for the Microlife device (98.7 % specificity).

Lown and co-workers (2017) stated that AF is a cause of stroke and a marker of atherosclerosis; of all patients with stroke, around 17 % have AF. The screening and treatment of AF could prevent about 12 % of all strokes. Several relatively low-cost devices with good accuracy now exist that can detect AF, including WatchBP and AliveCor. However, they can only measure the ECG or pulse over short time periods. Inexpensive devices such as heart rate monitors, which are widely available, can measure heart rate for prolonged periods and may have potential in screening for AF. In a pilot study, these researchers determined the accuracy of AliveCor and WatchBP, along with a bespoke algorithm using a heart rate monitor belt (Polar H7) and a wearable RR-interval recorder (Firstbeat Bodyguard 2), for detecting AF during a single screening visit in primary care patients. This is a multi-center case-control diagnostic study comparing the 4 different devices for the detection of AF against a reference standard consisting of a 12-lead ECG, conducted in GP surgeries across Hampshire, UK. These investigators aim to recruit 92 participants with AF and 329 without AF, aged 65 years and over. They will ask participants to rate comfort and overall impression for each device and will collect qualitative data from participants capturing their experience of using wearable devices in order to evaluate acceptability. These researchers will also collect data from general practitioners to determine their views on AF screening. The protocol was approved by the London-City & East Research Ethics Committee in June 2016. The findings of the trial will be disseminated through peer-reviewed journals, national and international conference presentations and the Atrial Fibrillation Association, UK. Moreover, these investigators stated that, based on the results, they will design a larger clinical trial to examine prolonged AF screening in the community using inexpensive consumer devices.

Tu and colleagues (2017) stated that paroxysmal AF is a common and preventable cause of devastating strokes. However, currently available monitoring methods, including Holter monitoring, cardiac telemetry and event loop recorders, have drawbacks that restrict their application in the general stroke population. The AliveCor heart monitor, a novel device that embeds a miniaturized ECG in a smartphone case coupled with an application to record and diagnose the ECG, has recently been shown to provide an accurate and sensitive single-lead ECG diagnosis of AF. This device could be used by nurses to record a 30-second ECG instead of manual pulse taking and automatically provide a diagnosis of AF. These researchers plan to compare the proportion of patients with paroxysmal AF detected by AliveCor ECG monitoring with current standard practice. Consecutive ischemic stroke and transient ischemic attack patients presenting to participating stroke units without known AF will undergo intermittent AliveCor ECG monitoring administered by nursing staff at the same frequency as the vital observations of pulse and blood pressure until discharge, in addition to the standard testing paradigm of each participating stroke unit to detect paroxysmal AF. This study will enroll 296 subjects; the primary outcome will be the proportion of patients with paroxysmal AF detected by AliveCor ECG monitoring compared with 12-lead ECG, 24-hour Holter monitoring and cardiac telemetry. The authors concluded that the use of the AliveCor heart monitor as part of routine stroke unit nursing observation has the potential to be an inexpensive, non-invasive method to increase paroxysmal AF detection, leading to improvement in secondary stroke prevention.

CardioPatch

Marcelli and colleagues (2017) described the conceptual design and the first prototype implementation of the Multi-Sense CardioPatch, a wearable multi-sensor patch for remote heart monitoring aimed at providing more detailed and comprehensive heart status diagnostics. The system integrates multiple sensors in a single patch for detection of both electrical (ECG) and mechanical (heart sounds, HS) cardiac activity, in addition to physical activity (PA). The prototype system also comprises a microcontroller board with a radio communication unit and is powered by a Li-ion rechargeable battery. Results from preliminary evaluations in healthy subjects showed that the prototype can successfully measure electro-mechanical cardiac activity, providing useful cardiac indexes. The authors concluded that the system has the potential to improve remote monitoring of cardiac function in chronically diseased patients undergoing home-based cardiac rehabilitation programs.

IHEART/Kardia Mobile

Kardia Mobile is the next generation of trans-telephonic ECG event recorders.

Hickey et al (2016) stated that AF is a major public health problem and is the most common cardiac arrhythmia, affecting an estimated 2.7 million Americans. The true prevalence of AF is likely under-estimated because episodes are often sporadic; therefore, it is challenging to detect and record an occurrence in a "real world" setting. To-date, mobile health tools that promote earlier detection and treatment of AF and improvement in self-management behaviors and knowledge have not been evaluated. This study will be the first to address the problem of AF with a novel approach utilizing advancements in mobile health ECG technology to empower patients to actively engage in their healthcare, and to evaluate the impact on quality of life and quality-adjusted life years. Furthermore, sending a daily ECG transmission, coupled with receiving educational and motivational text messages aimed at promoting self-management and a healthy lifestyle, may improve the management of chronic cardiovascular conditions (e.g., diabetes, heart failure, and hypertension). These researchers are currently conducting a prospective, single-center RCT to evaluate the effectiveness of a mobile health intervention, iPhone® Helping Evaluate Atrial fibrillation Rhythm through Technology (iHEART), versus usual cardiac care. A total of 300 participants with a recent history of AF will be enrolled. Participants will be randomized 1:1 to receive either the iHEART intervention, comprising an iPhone® equipped with an AliveCor® Mobile ECG and the accompanying Kardia application plus behavior-altering motivational text messages, or usual cardiac care for 6 months. The authors stated that this will be the first study to investigate the utility of a mobile health intervention in a "real world" setting. They will evaluate the ability of the iHEART intervention to improve the detection and treatment of recurrent AF and assess the intervention's impact on improving clinical outcomes, quality of life, quality-adjusted life-years and disease-specific knowledge.

Halcox and associates (2017) conducted a randomized controlled trial (RCT) of AF screening using an AliveCor Kardia monitor attached to a WiFi-enabled iPod to obtain ECGs (iECGs) in ambulatory patients. Patients greater than or equal to 65 years of age with a CHA2DS2-VASc score greater than or equal to 2, free from AF, were randomized to the iECG arm or routine care (RC). iECG participants acquired iECGs twice-weekly over 12 months (plus additional iECGs if symptomatic) onto a secure study server, with over-read by an automated AF detection algorithm and by a cardiac physiologist and/or consultant cardiologist. Time to diagnosis of AF was the primary outcome measure. The overall cost of the devices, ECG interpretation, and patient management was captured and used to generate the cost per AF diagnosis in iECG patients. Clinical events and patient attitudes/experience were also evaluated. These researchers studied 1,001 patients (500 iECG, 501 RC) who were 72.6 ± 5.4 years of age; 534 were women. The mean CHA2DS2-VASc score was 3.0 (heart failure, 1.4 %; hypertension, 54 %; diabetes mellitus, 30 %; prior stroke/transient ischemic attack, 6.5 %; arterial disease, 15.9 %; all CHA2DS2-VASc risk factors were evenly distributed between groups). A total of 19 patients in the iECG group were diagnosed with AF over the 12-month study period versus 5 in the RC arm (HR = 3.9; 95 % CI: 1.4 to 10.4; p = 0.007), at a cost per AF diagnosis of $10,780 (£8,255). There was a similar number of stroke/transient ischemic attack/systemic embolic events (6 versus 10, iECG versus RC; HR = 0.61; 95 % CI: 0.22 to 1.69; p = 0.34). The majority of iECG patients were satisfied with the device, finding it easy to use without restricting activities or causing anxiety. The authors concluded that screening with twice-weekly single-lead iECG with remote interpretation in ambulatory patients greater than or equal to 65 years of age at increased risk of stroke was significantly more likely to identify incident AF than RC over a 12-month period. They stated that this approach is also highly acceptable to this group of patients, supporting further evaluation in an appropriately powered, event-driven clinical trial.

Narasimha and colleagues (2018) noted that ambulatory cardiac monitoring devices such as external loop recorders (ELRs) are often used in the out-patient clinic to evaluate palpitations. However, ELRs can be bulky and uncomfortable to use, especially in public, at work, or in social situations. An alternative approach is a smartphone-based ECG recorder/event recorder (Kardia Mobile [KM]), but the comparative diagnostic yield of each approach had not been studied. In this study, a total of 33 patients with palpitations wore an ELR and carried a KM for a period of 14 to 30 days. They were instructed to transmit ECGs via KM and also to activate the ELR whenever they had symptoms. The tracings obtained from both devices were independently analyzed by 2 cardiologists, and the overall arrhythmia yield, as well as patient preference and compliance, were evaluated. The paired binomial data obtained from both devices were compared using an unconditional test of non-inferiority. Of the 38 patients enrolled in the study, more patients had a potential diagnosis for their symptoms (i.e., at least 1 symptomatic recording during the entire monitoring period) with KM than with the ELR (KM = 34 [89.5 %] versus ELR = 26 [68.4 %]; χ2 = 5.1, p = 0.024). In the per-protocol analysis, all 33 patients (100 %) had a potential diagnosis using the KM device, which was significantly higher compared with 24 patients (72.2 %) using the ELR (χ2 = 10.4, p = 0.001). The authors concluded that KM was non-inferior to an ELR for detecting arrhythmias in the out-patient setting. They stated that the ease of use and portability of this device make it an attractive option for the detection of symptomatic arrhythmias. This was a small study (n = 33) with a short study duration (14 to 30 days); these findings need to be validated by well-designed studies.
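For paired yes/no detection data like these, where both devices are carried by the same patients, a common way to compare diagnostic yields is a test on the discordant pairs (a McNemar-type analysis). The sketch below illustrates this with a hypothetical 2 × 2 cross-tabulation consistent with the reported marginal totals (34/38 for KM, 26/38 for the ELR); the authors' actual unconditional non-inferiority test is not reproduced here.

```python
# McNemar-style paired comparison of two devices' detection results in the
# same patients.  The 2x2 cross-tabulation below is hypothetical (only the
# marginals 34/38 for KM and 26/38 for the ELR are reported), so this is
# illustrative only, not a re-analysis of the study data.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: KM diagnostic yes/no; columns: ELR diagnostic yes/no.
table = [[25, 9],   # KM yes & ELR yes, KM yes & ELR no
         [1, 3]]    # KM no  & ELR yes, KM no  & ELR no

result = mcnemar(table, exact=True)   # exact binomial test on discordant pairs
print(result.statistic, result.pvalue)
```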

Pacemaker Event Recorders for Detection of Ventricular Arrhythmias

Sampaio and colleagues (2018) noted that although new pacemakers can register cardiac rhythm, few studies have evaluated their accuracy in diagnosing ventricular arrhythmias (VA). These investigators examined the correlation and agreement between the pacemaker's event monitor and ambulatory Holter monitoring in detecting VA. They studied 129 patients with pacemakers, with a mean age of 68.6 ± 19.1 years; 54.8 % were women. Once Holter monitoring was connected, the pacemakers' event counters were reset and the clocks of both systems were synchronized to register ECG simultaneously. Pacemakers were programmed to detect the lowest ventricular rate and the lowest number of sequential beats allowed in their event monitors. After 72 hours, the Holter and pacemaker records were analyzed. VA was defined on Holter and in the event monitor, respectively, as: isolated premature ventricular complexes: "PVC"; pairs: "couplets"; non-sustained ventricular tachycardia (NSVT): "triplets" (3 beats), "runs" (4 to 8 or greater than 8 beats), and "HVR" (3 to 4 beats). Spearman correlations evaluated whether the pacemaker and Holter identified the same parameters. Intra-class correlation coefficients (ICCs) and their 95 % CIs were calculated to assess the concordance between methods. The agreement between the 2 systems was low, except for "triplet" and 3-beat NSVT (ICC = 0.984). The correlation for more than 10 PVC/hour was moderate (kappa = 0.483). When the pacemaker was programmed to detect HVR sequences of 3 beats at less than 140 bpm (less than 140/3), the correlation with NSVT was perfect (r = 1) and agreement was also quite high (ICC = 0.800). The authors concluded that pacemakers' event monitors under-estimated the occurrence of VA detected by Holter. They stated that standardization of pacemakers' algorithms is needed before using this function for patients' clinical follow-up.
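Agreement statistics such as the kappa reported above are computed from paired per-patient classifications by the two methods. A minimal sketch follows, using Cohen's kappa on invented binary data (e.g., "more than 10 PVC/hour: yes/no" from the event monitor versus from Holter); the vectors are illustrative only, and the ICCs in the study would require the paired continuous counts rather than binary labels.

```python
# Minimal sketch of an agreement analysis of the kind reported: Cohen's
# kappa on paired binary classifications per patient (pacemaker event
# monitor versus Holter).  The vectors below are invented for illustration,
# not study data.
from sklearn.metrics import cohen_kappa_score

holter    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # Holter classification per patient
pacemaker = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]  # event-monitor classification

print(cohen_kappa_score(holter, pacemaker))
```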


Discussion

The COmPLETE study, with its design, will allow for the investigation and characterization of the physical fitness components of endurance capacity, muscle strength, and neuromuscular coordination in individuals without chronic diseases from the 20th to the 100th year of life, as well as in patients with heart failure. This study will therefore construct a novel dataset with normal values for all major physical fitness markers in healthy individuals.

The additional comprehensive assessment of vascular biomarkers in these individuals offers the opportunity to discriminate among apparently healthy individuals. Separately, it allows for the investigation of the mechanisms of aging and of the role of physical fitness components and physical activity in vascular markers across various vascular beds in health and heart failure.

Furthermore, the COmPLETE study may elucidate new diagnostic approaches through its combined and extensive assessment of physical fitness and vascular biomarkers. This will enable us to identify the most suitable diagnostic markers for CV risk and heart failure.

Given the inverse association of several vascular biomarkers with physical fitness components (endurance capacity, muscle strength, and neuromuscular coordination), individuals with excellent vascular health markers may have even better physical fitness than those with attenuated vascular health within the C-Health sample. The age-matched comparison with patients at different stages of heart failure may provide an estimate of the health distance of different fitness parameters from healthy individuals.

Health distance provides a new composite measure of aging-related decline in the adaptive capacity of the organism, obtained by comparing physiological or biological values against the reference "norms" (here, C-Health) for a group of interest (in our example, patients with prevalent heart failure) [128].
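One simple way to operationalize such a health distance is to standardize each marker by the age-matched healthy reference mean and standard deviation and take the Euclidean length of the resulting z-score vector. The sketch below illustrates this variant; the markers, reference values, and patient values are hypothetical, and the exact metric described in [128] may differ.

```python
# Illustrative "health distance": Euclidean distance of an individual's
# marker profile from the age-matched healthy reference means, after
# standardizing each marker by the reference SD.  This is one simple
# variant for illustration, not necessarily the metric of [128].
import numpy as np

def health_distance(subject: np.ndarray,
                    ref_mean: np.ndarray,
                    ref_sd: np.ndarray) -> float:
    """Z-score-based distance from the healthy reference profile."""
    z = (subject - ref_mean) / ref_sd
    return float(np.sqrt(np.sum(z ** 2)))

# Hypothetical markers: VO2peak, grip strength, pulse-wave velocity.
ref_mean = np.array([35.0, 38.0, 7.5])   # age-matched healthy means (assumed)
ref_sd   = np.array([6.0, 7.0, 1.2])     # age-matched healthy SDs (assumed)
patient  = np.array([18.0, 27.0, 10.4])  # heart-failure patient (invented)

print(health_distance(patient, ref_mean, ref_sd))
```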

The COmPLETE study shall provide a better understanding of which functional characteristics should be specifically targeted in primary and secondary prevention to achieve an optimal healthspan.