WASHINGTON, Feb. 21 (UPI) -- People with high blood pressure, or hypertension as it is known in the medical profession, were generally startled a few months back to learn that a large study had found that simple diuretics were more effective than a newer class of medications, ACE inhibitors, in controlling blood pressure. Yet more recently they learned that another study found the reverse to be true. The confusion is surely enough to raise anyone's blood pressure. What is going on here?
There are actually several answers to this simple question. The first is that the two studies were not exactly comparable, so it is perfectly possible for them to come up with conflicting recommendations. But the second answer, perhaps more important, is that neither study was quite as definitive as the news reports made them out to be.
The first study, from the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT to its friends), was a truly massive undertaking following 42,000 patients in the United States and Canada. It was designed to compare diuretics with three other blood-pressure-lowering treatments. It found that patients using diuretics, especially blacks, had lower blood pressure readings than those using the other medications, and faced a lower risk of stroke, heart failure, angina or heart surgery.
The second study, the Second Australian National Blood Pressure Study (ANBP2), published in the Feb. 13 issue of the New England Journal of Medicine, involved 6,083 patients in Australia and compared patients who used diuretics with those who used ACE inhibitors. The specific drugs in both classes were different from those used in the ALLHAT study. This study again found lower blood pressures among the diuretic group, but found that 695 of the 3,044 patients using ACE inhibitors suffered heart problems or strokes, as opposed to 736 of the 3,029 patients using diuretics. The study therefore concluded that diuretics posed more of a risk to the patient.
Yet this elevated risk is minimal. The numbers amount to 24 percent of diuretic patients suffering problems compared to 23 percent of those receiving ACE inhibitors. A one-point difference is not really enough to say anything with certainty about which treatment is safer, even though the findings for male patients just achieved statistical significance. On the other hand, the benefits in blood pressure readings were also minimal -- typically one or two points, although they seemed to be larger for blacks in ALLHAT (the Australian study contained fewer blacks in its study groups).
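For readers who want to check the arithmetic themselves, the raw ANBP2 counts quoted above can be turned into percentages with a few lines of Python. This is a back-of-the-envelope sketch, not the study's own statistical analysis, which adjusted for many other factors:

```python
# Crude event rates from the ANBP2 counts quoted above.
diuretic_events, diuretic_n = 736, 3029
ace_events, ace_n = 695, 3044

diuretic_rate = diuretic_events / diuretic_n * 100  # about 24.3 percent
ace_rate = ace_events / ace_n * 100                 # about 22.8 percent

print(f"Diuretic group:      {diuretic_rate:.1f}% had events")
print(f"ACE inhibitor group: {ace_rate:.1f}% had events")
print(f"Gap: {diuretic_rate - ace_rate:.1f} percentage points")
```

Before rounding, the gap is closer to 1.5 percentage points than 1 -- still small, which is the column's point.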
In truth, as an accompanying editorial in NEJM argued, there seems to be little difference between the two medications when it comes to the treatment of hypertension. If high blood pressure alone is the issue, the diuretic should probably win the day as being significantly cheaper. If other issues are involved, then an ACE inhibitor might seem appropriate in addition or, for example in the case of a patient with diabetes, as the initial medication. Beta-blockers and calcium channel blockers may also be appropriate depending on the patient's circumstances.
As so often, what we should learn most from such conflicting studies is not that our doctors have got everything wrong, or fallen victim to drug company hype, as The New York Times seemed to suggest in its coverage of the ALLHAT study. Instead, we should realize that medical decisions very often depend more on the individual circumstances of the patient than on the general effectiveness of one form of medication or another. In that respect, these studies are more significant to the medical profession than they are to the general public.
It is often said that you can prove anything with statistics, because the manipulation of raw data using statistical techniques can often provide startlingly different conclusions from the same basic information. A case in point was a recent medical study published in The Lancet, which looked at the effects on heart disease of adding ibuprofen treatment to aspirin treatment among cardiovascular patients. By calculating a "hazard ratio," the researchers concluded that those taking both medicines were twice as likely to die, from all causes, as those taking aspirin alone.
However, a look at the raw data provides a slightly different picture. The aspirin-only group suffered 1,983 deaths among 6,285 patients. The combined group suffered 62 deaths among 187 patients. Both figures work out at about a third of each group dying. How, then, did the researchers find a doubled risk? Because hazard ratios take account of the timing of deaths -- in other words, how quickly the patients died -- and deaths do seem to have occurred sooner in the double-treatment group. The researchers controlled for a large number of possibly confounding factors, so they may well have found something here, but it is still slightly misleading to characterize it as a doubled chance of death among those receiving both medicines. What they found was a faster rate of death, which is something quite different.
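The gap between the headline figure and the raw counts can be seen in a quick calculation. Using only the numbers quoted above (and ignoring timing, which is exactly what a hazard ratio does not ignore), the simple risk ratio is nowhere near two:

```python
# Crude death proportions from the raw Lancet counts quoted above.
# This deliberately ignores when deaths occurred, which is the
# information a hazard ratio is built on.
aspirin_deaths, aspirin_n = 1983, 6285
combined_deaths, combined_n = 62, 187

aspirin_prop = aspirin_deaths / aspirin_n     # about 0.32
combined_prop = combined_deaths / combined_n  # about 0.33

crude_ratio = combined_prop / aspirin_prop    # about 1.05, not 2

print(f"Aspirin only:        {aspirin_prop:.1%} died")
print(f"Aspirin + ibuprofen: {combined_prop:.1%} died")
print(f"Crude risk ratio:    {crude_ratio:.2f}")
```

A crude ratio of roughly 1.05 against a hazard ratio of 2 is the whole story in miniature: the doubling refers to the rate at which deaths accumulated over time, not to the eventual share of patients who died.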
I am grateful to the Internet's Medpundit (http://medpundit.blogspot.com) for drawing my attention to this confusing study.
Finally, an interesting article in Britain's Independent newspaper quoted an eminent cardiologist as pointing out the difference between British and continental European approaches to heart disease: "We make an objective assessment of the risk based on measures including cholesterol level, blood pressure and smoking while doctors from most other countries do it by the seat of their pants."
Britain leads Europe in deaths from heart disease.
(This column examines the facts behind recent statistical studies that have made the news but been misinterpreted, failed to make the news for some reason or are just plain weird.)