Primary Myelodysplastic Syndromes: The Mayo Clinic Experience With 1000 Patients [Communiqué]
Myelodysplastic syndromes (MDS) are malignant hematopoietic stem cell disorders categorized under chronic myeloid malignancies in the World Health Organization (WHO) 2008 classification. They may be further subcategorized as primary (de novo) or secondary, the latter arising from previous chemotherapy, radiation therapy, or antecedent myeloid malignancies. Myelodysplastic syndromes comprise a heterogeneous group of disorders characterized by dysplastic and ineffective blood cell production leading to peripheral blood cytopenias despite a hypercellular bone marrow, likely as a result of increased apoptosis in the marrow. The pathophysiology of the disease remains largely elusive. The only exception is MDS with del(5q), in which haploinsufficiency of the ribosomal gene RPS14 (for expansion of gene symbols, see www.genenames.org), which is required for maturation of the 40S ribosomal subunit and maps to the commonly deleted region, and homozygous inactivation of the casein kinase 1A1 gene (CSNK1A1) play a central role in disease biology. In recent years, the discovery of several recurrent genetic abnormalities involving signal transduction (NRAS, KRAS, and CBL), transcription regulation (RUNX1), epigenetic regulation (ASXL1, DNMT3A, TET2, EZH2, and IDH1/2), the spliceosome machinery (SF3B1, SRSF2, U2AF, and ZRSR2), and DNA repair (TP53) has provided insight into the clinical heterogeneity of these disorders. For instance, mutations in the spliceosome component SF3B1 correlate with the presence of ringed sideroblasts. Of note, mutations involving these genes are not specific to MDS and occur at variable frequency in other myeloid malignancies.
Most patients with MDS are elderly, with a median age of 70 years, and typically present with complications associated with peripheral blood cytopenias. The clinical heterogeneity of MDS, with an extremely variable risk of transformation to acute myeloid leukemia, explains the variation observed in survival, which ranges from only a few months to almost a decade. Hence, treatment options vary from watchful waiting and supportive care to disease-modifying therapy and allogeneic bone marrow transplant. The latter is the only potentially curative option but is generally limited to patients younger than 70 years and carries the potential cost of significant treatment-related morbidity and mortality. Given the variable clinical course, an accurate prognostic assessment becomes critical. To that end, several prognostic scoring systems have been developed over the years, starting with the International Prognostic Scoring System (IPSS) in 1997. This was followed by the World Health Organization Prognostic Scoring System (WPSS) in 2007, the global MD Anderson score (MDAS) in 2008, and the most recent Revised International Prognostic Scoring System (IPSS-R) in 2012. Each of these scoring systems uses readily available clinical and morphological variables, such as marrow blast percentage (IPSS, IPSS-R, and MDAS), karyotype (IPSS, IPSS-R, WPSS, and MDAS), number (IPSS) or degree of cytopenias (IPSS-R and MDAS), age (MDAS), WHO morphological classification (WPSS), transfusion dependence (WPSS and MDAS), and performance status (MDAS). With the advent of genome sequencing and identification of recurrent mutations of prognostic relevance, it is reasonable to expect the incorporation of molecular variables in future prognostic scoring systems.
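The variables just described combine into simple additive scores. As a rough illustration, the original 1997 IPSS calculation can be sketched as follows; the point values and risk-group cut points are those of the published IPSS, but the function names and input encoding are our own and are not part of this study:

```python
# Illustrative sketch of the 1997 IPSS calculation (not part of the study).
def ipss_score(blast_pct, karyotype_risk, n_cytopenias):
    """Return the raw IPSS score.

    blast_pct: bone marrow blast percentage (original IPSS range: up to 30%)
    karyotype_risk: 'good', 'intermediate', or 'poor'
    n_cytopenias: number of cytopenias (0-3)
    """
    if blast_pct < 5:
        blasts = 0.0
    elif blast_pct <= 10:
        blasts = 0.5
    elif blast_pct <= 20:
        blasts = 1.5
    else:
        blasts = 2.0
    cyto = {'good': 0.0, 'intermediate': 0.5, 'poor': 1.0}[karyotype_risk]
    cytopenias = 0.0 if n_cytopenias <= 1 else 0.5
    return blasts + cyto + cytopenias

def ipss_risk_group(score):
    """Map a raw IPSS score onto the 4 published risk groups."""
    if score == 0:
        return 'Low'
    if score <= 1.0:
        return 'Intermediate-1'
    if score <= 2.0:
        return 'Intermediate-2'
    return 'High'
```

For example, a patient with 3% blasts, a good risk karyotype, and a single cytopenia scores 0 (Low), whereas 12% blasts with a poor risk karyotype and pancytopenia scores 3.0 (High).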
In the present study, we share our 25 years of experience with 1000 consecutive patients with primary MDS evaluated at Mayo Clinic, with the following main objectives: (1) to provide a comprehensive description of clinical, laboratory, and morphological characteristics at diagnosis; (2) to validate prognostic factors predictive of survival and evolution to acute leukemia; (3) to apply currently available prognostic scoring systems; and (4) to compare the clinical characteristics and survival of patients diagnosed before and after 2005.
Patients and Methods
After approval by the institutional review board, we retrospectively identified 1000 consecutive patients with primary MDS who were untreated at the time of referral to Mayo Clinic in Rochester, Minnesota, between January 1989 and May 2014. A thorough review of medical records was conducted to ensure the inclusion of patients with primary MDS only. The diagnosis of MDS and leukemic transformation (LT) was made according to the WHO criteria.
The following MDS morphological categories were considered: refractory anemia, refractory anemia with ringed sideroblasts, refractory cytopenia with multilineage dysplasia (RCMD), RCMD with ringed sideroblasts, refractory anemia with excess blasts-1 (RAEB-1), refractory anemia with excess blasts-2 (RAEB-2), MDS with isolated del(5q), and MDS unclassified. Patients with LT (>20% myeloblasts) at the time of evaluation and those with chronic myelomonocytic leukemia were excluded from the study. All morphological and cytogenetic assessments had to be either performed or reviewed at our institution for study inclusion. The bone marrow slides were not rereviewed for the purpose of this study; classification and any pertinent morphological findings were based on the original bone marrow pathology report. Cytogenetic results were interpreted and reported according to the International System for Human Cytogenetic Nomenclature. The definition of red cell transfusion dependency included patients presenting with symptomatic anemia that necessitated red cell transfusion and those with a history of red cell transfusions; however, only those with an ongoing need for red cell transfusions were considered transfusion dependent, and we excluded those who might have had isolated instances of transfusion in the past. The actual number of transfusions was not considered in labeling a patient red cell transfusion dependent.
We ensured that complete follow-up information, including disease status, treatment details, and cause of death, was updated in January 2015 by telephone contact with patients or their local hematologists.
Continuous variables are summarized as median (range) and categorical variables as frequency (percentage). The medians of continuous variables were compared using the Mann-Whitney U test when there were 2 groups and the Kruskal-Wallis test when there were more than 2 groups. Categorical variables were compared using the Fisher exact test or the chi-square test when there were 2 groups and the Fisher-Freeman-Halton test when there were more than 2 groups. The Kaplan-Meier method was used to construct cumulative time-to-event curves, and the curves were compared using the log-rank test. Survival was measured from the date of diagnosis to the date of death or last follow-up; time to LT was measured from the date of diagnosis to the date of LT or last follow-up. Cox proportional hazards regression was used for multivariable analyses of the end points of time to death and time to LT using age, sex, anemia categories, thrombocytopenia categories, bone marrow blast percentage, cytogenetic categories, red cell transfusion dependence, Eastern Cooperative Oncology Group (ECOG) performance status, and IPSS, IPSS-R, WPSS, and MD Anderson scores. P values less than .05 were considered statistically significant. The StatView statistical package (SAS Institute) was used for all analyses.
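The time-to-event curves described above rest on the Kaplan-Meier product-limit estimator, in which the survival probability drops by a factor of (1 − deaths/at-risk) at each observed event time while censored patients simply leave the risk set. A minimal, stdlib-only sketch (illustrative only; the study's actual analyses used StatView):

```python
# Minimal sketch of the Kaplan-Meier product-limit estimator.
# Real analyses would use a dedicated package (e.g., R's survival package).
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time.

    times: follow-up durations (e.g., months)
    events: 1 = death observed, 0 = censored at that time
    """
    at_risk = len(times)
    data = sorted(zip(times, events))
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = ties = 0
        # group all observations (deaths and censorings) tied at time t
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            ties += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk  # product-limit step
            curve.append((t, surv))
        at_risk -= ties  # everyone observed at t leaves the risk set
    return curve
```

For instance, with follow-up times [2, 3, 3, 5, 8] months and event indicators [1, 0, 1, 1, 0], the estimated survival falls to 0.8 at 2 months, 0.6 at 3 months, and 0.3 at 5 months.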
Results
A total of 1000 consecutive patients with primary MDS were considered, including 531 of 1000 (53%) patients diagnosed before 2005 and 469 of 1000 (47%) patients diagnosed after 2005. The year 2005 was chosen as the cutoff for analysis to coincide with the approval of the hypomethylating agent azacitidine in 2004, followed by that of the immunomodulatory agent lenalidomide, for the treatment of MDS. The presenting clinical and laboratory characteristics of all 1000 patients, including the subsets diagnosed before or after 2005, are summarized in Table 1. Information for computing the scores of the 4 common prognostic scoring systems (IPSS, WPSS, MDAS, and IPSS-R) was available in all patients. Patients diagnosed before 2005 were more likely to present with anemia, defined as a hemoglobin level of less than 10 g/dL (to convert g/dL values to g/L, multiply by 10.0), and displayed a higher incidence of refractory anemia (25 of 531 [5%]), refractory anemia with ringed sideroblasts (94 of 531 [18%]), and MDS unclassified (106 of 531 [20%]) than did patients diagnosed after 2005, who had a higher incidence of RCMD (188 of 469 [40%]) and RAEB-1 and RAEB-2 (164 of 469 [35%]) (P<.001). As a result, the latter group had higher WPSS scores and a higher incidence of IPSS poor risk and IPSS-R very poor risk cytogenetics than did the former group. However, the IPSS, IPSS-R, and MDAS risk distributions were similar between the 2 cohorts. As expected, approximately half (226 of 469 [48%]) the patients diagnosed after 2005 received disease-modifying therapy, with 47 of 469 (10%) having undergone allogeneic transplant, as compared with only 72 of 531 (14%) patients diagnosed before 2005 who received disease-modifying therapy, with 18 of 531 (3%) having undergone allogeneic transplant (P<.001).
Abnormal cytogenetics occurred in approximately half (510 of 1000) the patients, with detailed clinical correlates of abnormal karyotype by sole abnormalities, 2 abnormalities, and complex karyotype provided in Table 2. Most (327 of 510 [64%]) were sole abnormalities, the most frequent being del(5q) (86 of 327 [26%]), trisomy 8 (49 of 327 [15%]), del(20q) (46 of 327 [14%]), -Y (38 of 327 [12%]), monosomy 7 (14 of 327 [4%]), del(11q) (11 of 327 [3%]), trisomy 21 (7 of 327 [2%]), del(13q) (6 of 327 [2%]), and del(7q) (5 of 327 [2%]). Complex karyotype, defined as 3 or more abnormalities, was noted in 114 of 1000 (11%) patients. Monosomal karyotype (MK), defined by the presence of 2 autosomal monosomies or a single autosomal monosomy in association with another structural abnormality, occurred in 101 of 1000 (10%) patients. Patients with an abnormal karyotype were less likely to be older than 60 years, had lower platelet counts, had higher IPSS, IPSS-R, WPSS, and MD Anderson scores, and had a higher incidence of red cell transfusion need than did patients with a normal karyotype. Furthermore, patients with a complex karyotype had a lower platelet count, a higher bone marrow blast percentage, and higher IPSS, IPSS-R, WPSS, and MD Anderson scores than did patients with sole or 2 abnormalities, in addition to a lower absolute neutrophil count vs patients with sole abnormalities. Accordingly, patients with a complex karyotype were more likely to have received disease-modifying therapy.
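The MK definition above (2 or more autosomal monosomies, or a single autosomal monosomy together with another structural abnormality) is a mechanical rule that can be expressed compactly. The sketch below assumes a hypothetical string encoding of abnormalities (e.g., '-7' for monosomy 7, 'del(5q)' for a structural change, '+8' for trisomy 8) and is not code used in the study:

```python
# Sketch of the monosomal karyotype (MK) rule; the abnormality string
# encoding ('-7', 'del(5q)', '+8', '-Y', ...) is a hypothetical convention.
def is_monosomal_karyotype(abnormalities):
    """Return True if the abnormality list meets the MK definition."""
    # Autosomal monosomies: whole-chromosome losses excluding sex chromosomes.
    monosomies = [a for a in abnormalities
                  if a.startswith('-') and a[1:] not in ('X', 'Y')]
    # Structural abnormalities: everything that is not a gain (+) or loss (-),
    # e.g., deletions, translocations, inversions.
    structural = [a for a in abnormalities if a[0] not in '+-']
    return len(monosomies) >= 2 or (len(monosomies) == 1 and len(structural) >= 1)
```

Under this rule, {-7, -5} and {-7, del(5q)} both qualify as MK, whereas an isolated -7, an isolated -Y, or -7 with only a trisomy do not.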
The median overall survival for patients with an abnormal karyotype was significantly shorter than that for patients with a normal karyotype (22 months vs 40 months, respectively; P<.001). Time to LT was not significantly different between patients with an abnormal karyotype and patients with a normal karyotype (P=.36). Furthermore, among patients with an abnormal karyotype, the overall survival of patients with sole abnormalities vs 2 abnormalities vs complex karyotype was significantly different (30 months vs 22 months vs 7 months, respectively; P<.001). The median time to LT was not reached for patients with sole and 2 abnormalities but was 82 months for patients with a complex karyotype (P<.001).
The results of multivariable analyses of overall survival are presented in Table 3. For the IPSS, age as a continuous variable, male sex, bone marrow blast percentage categories, French-American-British (FAB) classification RCMD, FAB classification RAEB-1, and poor cytogenetic category were independent predictors of survival, with hazard ratios (HRs) (95% CIs) of 1.0 (1.0-1.1), 1.3 (1.1-1.5), 1.7 (1.0-2.7), 1.3 (1.1-1.5), 1.9 (1.4-2.6), and 3.3 (2.2-5.0), respectively. The results of multivariable analyses of overall survival for the IPSS-R, WPSS, and MDAS are presented in Table 3. In addition, the results of multivariable analyses of LT are presented in Table 4. For the IPSS, bone marrow blast percentage categories, FAB classification RCMD, FAB classification RAEB-1, and poor cytogenetic category were independent predictors of LT, with HRs (95% CIs) of 5.2 (1.9-14.7), 2.1 (1.1-4.2), 3.9 (1.7-8.9), and 3.7 (1.5-9.1), respectively. The results of multivariable analyses of LT for the IPSS-R, WPSS, and MDAS are presented in Table 4.
The results of the Kaplan-Meier survival analysis of IPSS-R cytogenetic categories are presented in Figure 1, A. Patients with a good risk karyotype (normal, del(5q), del(20q), and del(12p)) exhibited longer median survival than did patients with a very good risk karyotype (-Y and del(11q)) (41 months vs 31 months, respectively; P=.05). Using Cox proportional hazards regression with age older than 60 years as a covariate, the HR (95% CI) for patients with a good risk karyotype vs patients with a very good risk karyotype was 0.73 (0.53-1.0) (P=.05).
In addition, the median survival of patients with a very good risk karyotype vs patients with an intermediate risk karyotype vs patients with a poor risk karyotype was similar (31 months vs 24 months vs 20 months, respectively; P=.78) (Figure 1, A). Using Cox proportional hazards regression, the HR (95% CI) for patients with a very good risk karyotype vs patients with an intermediate risk karyotype was found to be 1.1 (0.79-1.61) (P=.49), and the HR (95% CI) for patients with a very good risk karyotype vs patients with a poor risk karyotype was found to be 1.1 (0.69-1.73) (P=.70). Furthermore, patients with an IPSS-R good risk karyotype continued to exhibit superior median survival in comparison to patients with a very good risk karyotype (39 months vs 27 months, respectively; P=.009) when analysis was restricted to 702 patients treated with supportive care alone (Figure 1, B). Using Cox proportional hazards regression, the HR (95% CI) for patients with a good risk karyotype vs patients with a very good risk karyotype was found to be 0.61 (0.42-0.89) (P=.01). In addition, the median survival of patients with a very good risk karyotype vs patients with an intermediate risk karyotype was similar (27 months vs 21 months, respectively) (Figure 1, B). Using Cox proportional hazards regression, the HR (95% CI) for patients with a very good risk karyotype vs patients with an intermediate risk karyotype was found to be 1.02 (0.67-1.53) (P=.94).
When MK was considered separately from the IPSS-R poor and very poor risk karyotype categories, the median survival of patients with an IPSS-R poor risk karyotype vs a very poor risk karyotype vs MK was 33 months vs 12 months vs 6.5 months, respectively (P<.001) (Figure 1, C). Using Cox proportional hazards regression, the HR (95% CI) for patients with a poor risk karyotype vs patients with a very poor risk karyotype was 0.72 (0.36-1.45) (P=.36), whereas the HR (95% CI) for patients with MK vs patients with a very poor risk karyotype was 1.94 (1.06-3.58) (P=.03).
Application of Contemporary Prognostic Models
Subsequently, we applied the IPSS, IPSS-R, WPSS, and MDAS scoring systems to our cohort and found that each effectively discriminated between the different prognostic subgroups, with the exception of the IPSS-R, which did not effectively discriminate between the intermediate and high risk groups (Figure 2, A-D). The IPSS risk distribution was as follows: 293 of 1000 (29%) low risk, 466 of 1000 (47%) intermediate-1, 189 of 1000 (19%) intermediate-2, and 52 of 1000 (5%) high risk, with corresponding median survival of 66, 29, 13.5, and 8 months, respectively (P<.001). The IPSS-R risk distribution was as follows: 168 of 1000 (17%) very low, 359 of 1000 (36%) low, 207 of 1000 (21%) intermediate, 152 of 1000 (15%) high, and 114 of 1000 (11%) very high risk, with corresponding median survival of 72, 43, 24, 18, and 7 months, respectively (P<.001). Further subgroup analysis revealed that patients in the IPSS-R intermediate and high risk groups had similar median survival of 24 and 18 months, respectively (P=.22). The WPSS risk distribution was as follows: 134 of 1000 (13%) very low, 294 of 1000 (29%) low, 229 of 1000 (23%) intermediate, 264 of 1000 (26%) high, and 79 of 1000 (8%) very high risk, with corresponding median survival of 81, 42, 28, 18, and 6 months, respectively (P<.001). The MDAS risk distribution was as follows: 288 of 1000 (29%) low, 424 of 1000 (42%) intermediate-1, 172 of 1000 (17%) intermediate-2, and 116 of 1000 (12%) high risk, with corresponding median survival of 73, 33, 16.5, and 7.5 months, respectively (P<.001). Similar results were obtained when our analysis was restricted to the 702 of 1000 (70%) patients who received supportive care alone, which included transfusions and erythropoiesis-stimulating agents (Supplemental Figure 1, A-D, available online at http://www.mayoclinicproceedings.org).
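The five IPSS-R risk categories reported above are assigned from a total score using fixed cut points. As a hedged illustration (the cut points below are those published with the 2012 IPSS-R; the function itself is only an illustrative sketch, not code from this study):

```python
# Illustrative mapping of a total IPSS-R score onto its 5 risk categories;
# cut points are from the published 2012 IPSS-R.
def ipss_r_risk_group(total_score):
    """Map a total IPSS-R score onto its published risk category."""
    if total_score <= 1.5:
        return 'Very low'
    if total_score <= 3.0:
        return 'Low'
    if total_score <= 4.5:
        return 'Intermediate'
    if total_score <= 6.0:
        return 'High'
    return 'Very high'
```

Under this mapping, a patient scoring 4.0 falls into the Intermediate group, whose outcome in our cohort overlapped with that of the High group (24 vs 18 months).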
Myelodysplastic syndromes typically affect the elderly, and the median age of our cohort was 72 years; however, a small proportion of patients (131 of 1000 [13%]) were younger than 60 years. We analyzed the survival outcomes of these patients younger than 60 years separately and found a median survival of more than 9 years for the IPSS low risk category. Similar results were obtained with the WPSS and MDAS scoring systems. However, the IPSS-R failed to discriminate between the risk categories when applied to patients younger than 60 years (Supplemental Figure 2, A-D, available online at http://www.mayoclinicproceedings.org).
Outcome in Patients Diagnosed Before vs After 2005
The year 2004 marked the advent of new treatments with the approval of the hypomethylating agent azacitidine for all subtypes of MDS, followed by the approval of the immunomodulatory agent lenalidomide for MDS with del(5q) in 2005. Therefore, we hypothesized a potential improvement in survival over the years, especially in patients diagnosed after 2005. The median survival of our entire cohort was 30 months. Intriguingly, we found a similarly poor outcome in patients diagnosed before and after 2005, with a median survival of 32 and 28 months, respectively (P=.46) (Figure 3, A). Furthermore, outcome remained poor in patients categorized in the IPSS-R high and very high risk groups despite the introduction and increased use of disease-modifying therapies (Figure 3, B). Of note, the difference in median survival between patients in the IPSS-R high risk group diagnosed after 2005 (20 months) and those diagnosed before 2005 (14 months) did not reach statistical significance (P=.30). Similar results were obtained when Cox analysis was performed (diagnosis before 2005: HR, 0.95; 95% CI, 0.82-1.1; P=.47) and when IPSS-R risk stratification was applied (diagnosis before 2005: HR, 1.002; 95% CI, 0.87-1.16; P=.98). The median leukemia-free survival was not reached for patients diagnosed before or after 2005 (P=.14).
Discussion
This is the largest series of patients with primary MDS to be evaluated at a single institution, reinforcing the uniformity of both diagnostic and therapeutic interventions. The original IPSS cohort comprised 816 patients from 7 institutions. The IPSS was subsequently revised (IPSS-R) in 2012 on the basis of 7012 patients with MDS recruited from multiple international institutions, of which 5504 cases were classified according to the WHO criteria. In contrast, the WPSS and MDAS cohorts were each evaluated at a single institution, including 426 patients in the learning cohort and 793 patients in the validation cohort for the WPSS and 507 patients with primary MDS out of a total of 1915 patients in the MDAS cohort.
Herein, we describe the presenting clinical and laboratory characteristics and survival outcomes of 1000 consecutive patients with primary MDS evaluated and followed at our institution over 25 years. The salient features of our cohort are that each case was classified according to the WHO 2008 criteria and that we excluded patients with secondary MDS arising from previous chemotherapy, radiation therapy, or antecedent myeloid malignancies. In addition, unlike the IPSS-R cohort, we excluded patients with oligoblastic acute myeloid leukemia and chronic myelomonocytic leukemia. We confirm that MDS occurs in elderly patients, with 852 of 1000 (85%) being older than 60 years, and that there is a male preponderance, with men accounting for more than two-thirds of patients (695 of 1000). Approximately half (556 of 1000) the patients with MDS presented with anemia, defined as a hemoglobin level of less than 10 g/dL; 476 of 1000 presented with thrombocytopenia, with a platelet count of less than 100 × 10⁹/L, and a quarter (257 of 1000) with neutropenia, with an absolute neutrophil count of less than 800/µL, which is consistent with previous studies.
One-third (328 of 1000) of patients were red cell transfusion dependent at diagnosis, as previously reported in the WPSS cohort. The most common WHO subtype at diagnosis was RCMD, which was seen in 314 of 1000 (31%) patients. One-third (298 of 1000) of our patients received disease-modifying therapy including allogeneic transplant (65 of 1000 [7%]).
Abnormal cytogenetics at diagnosis occurred in approximately half (510 of 1000) the patients, with the majority (327 of 510 [64%]) being sole abnormalities. The IPSS-R cytogenetic categories were not effective in discriminating between the 5 prognostic groups, even when our analysis was restricted to patients who received supportive care alone, as in the IPSS-R cohort. This is consistent with our previous cytogenetic study of 783 patients with primary MDS. The IPSS-R good risk cytogenetics category (del(5q), del(20q), and del(12p)) exhibited the best prognosis in our cohort; the reason for this discrepancy is not entirely clear. We also highlight the poor prognostic effect of MK in MDS, which is consistent with previous observations by our group and several others.
We successfully validated each of the 4 commonly used prognostic scoring systems (IPSS, IPSS-R, WPSS, and MDAS) in our cohort, showing effective discrimination between the different risk groups, with median survival ranging from more than 5 years for low-risk patients to less than 6 months for high-risk patients. The only exception was the IPSS-R intermediate and high risk groups, with similar survival of 24 and 18 months, respectively. Although the IPSS and IPSS-R are applicable to untreated patients evaluated at the time of diagnosis, the WPSS and MDAS are dynamic scoring systems, with the latter including treated patients as well. Of note, the scores for each of these systems were computed using parameters obtained at the time of diagnosis for our entire cohort. In the future, we expect that the recognition of molecular markers of prognostic relevance will serve to further refine risk stratification, particularly in our low-risk patients. For instance, a recent study by Bejar et al that incorporated mutational status found that the presence of an EZH2 mutation identified 29% of lower-risk patients with MDS with a worse prognosis than that originally predicted by the low-risk MDAS model.
A major strength of our study is the relatively long and mature median follow-up of 27 months, during which 808 of 1000 (81%) deaths and 129 of 1000 (13%) LTs were documented. This is further enhanced by an even distribution of patients diagnosed before and after 2005, which enabled us to provide an accurate comparison of survival patterns between the 2 groups. Notably, 488 of 531 (90%) patients diagnosed before 2005 had died during a median follow-up of 31 months as compared with 320 of 469 (68%) patients diagnosed after 2005, who had died during a median follow-up of 24 months (P<.001). However, the LT rate was similar in patients diagnosed before and after 2005 (68 of 531 [13%] vs 47 of 469 [10%], respectively; P=.92). The survival of our patients diagnosed after 2005 remained poor despite approximately half (226 of 469) of these patients having received disease-modifying therapy, including allogeneic transplant. This is in contrast to a recently published large study of 4147 patients with MDS from the Düsseldorf MDS Registry that found an improved prognosis in both treated and untreated high-risk patients with MDS diagnosed after 2002, mainly because of a reduction in deaths from infectious and bleeding complications owing to improvements in supportive care. However, there was no difference in death due to LT in patients diagnosed before and after 2002. These findings underscore the importance of continual evaluation of novel therapies for MDS in a clinical trial setting. The limitations of our study include the retrospective design and the long time span during which our patients were evaluated, with various pathologists having reviewed the original bone marrow biopsy slides, which were not rereviewed for the purpose of this study.
Our study is a comprehensive resource describing the natural history of primary MDS in patients evaluated and treated at Mayo Clinic over 25 years. As we begin to unravel the biology of the disease and evaluate novel therapies, it will provide a framework for future studies involving molecular risk stratification, genotype-phenotype correlations, and genetic determinants of response to therapy.
Abbreviations and Acronyms:
FAB=French-American-British; HR=hazard ratio; IPSS=International Prognostic Scoring System; IPSS-R=Revised International Prognostic Scoring System; LT=leukemic transformation; MDAS=MD Anderson score; MDS=myelodysplastic syndrome; MK=monosomal karyotype; RAEB-1=refractory anemia with excess blasts-1; RAEB-2=refractory anemia with excess blasts-2; RCMD=refractory cytopenia with multilineage dysplasia; WHO=World Health Organization; WPSS=World Health Organization Prognostic Scoring System
Used with permission from Elsevier. This article was published in Mayo Clinic Proceedings: Gangat N, Patnaik MM, Begna K, et al: Primary Myelodysplastic Syndromes: The Mayo Clinic Experience With 1000 Patients. Mayo Clin Proc 2015 Dec;90(12):1623-1638 Copyright Elsevier (2015). References omitted. The complete article is available online at www.mayoclinicproceedings.org