The Pacific Meridional Mode as an ENSO Precursor and Predictor in the North American Multimodel Ensemble
Journal of Climate

ABSTRACT

Although modeling and observational studies have highlighted a robust relationship between the Pacific meridional mode (PMM) and El Niño-Southern Oscillation (ENSO)-namely, that the PMM is often a precursor to El Niño events-it remains unclear if this relationship has any real predictive use. Bridging the gap between theory and practical application is essential, because the potential use of the PMM precursor as a supplemental tool for ENSO prediction has been implied but not yet implemented into a realistic forecast setting. In this paper, a suite of sea surface temperature hindcasts is utilized from the North American Multimodel Ensemble (NMME) prediction experiment between 1982 and 2010. The goal is first to assess the NMME's ability to forecast the PMM precursor and second to examine the relationship between PMM and ENSO within a forecast framework. In terms of model performance, results are optimistic in that not only is PMM variability captured well by the multimodel ensemble mean, but it also appears as a precursor to ENSO events in the NMME. In forecast mode, positive PMM events predict eastern Pacific El Niño events in both observations and model forecasts with some skill, yet with less skill for central Pacific El Niño events. Conversely, negative PMM events poorly predict La Niña events in observations, yet the model forecasts fail to capture this observed representation. There proves to be considerable opportunity for improvement of the PMM-ENSO relationship in the forecast models; accordingly, the predictive use of PMM for certain types of ENSO events may also see improvement.



1. Introduction

Within the past decade, studies outlining Pacific meridional mode (PMM) variability have become more frequent because of the strong connection between PMM and El Niño-Southern Oscillation (ENSO) in observations and certain coupled climate models (e.g., Chiang and Vimont 2004; Chang et al. 2007; Zhang et al. 2009a,b; Wu et al. 2010; Larson and Kirtman 2013). The potential for utilizing PMM variability as a supplemental tool for ENSO prediction has been implied in such articles, yet a study investigating this potential within a climate prediction framework has not been performed. We present such results in this study.

PMM is low-frequency atmosphere-ocean coupled variability first discussed in Chiang and Vimont (2004) and found to be independent of ENSO yet of a similar interannual time scale. PMM has been linked to extratropical atmospheric variability, which tends to project onto the tropical Pacific SST via alterations of the trade wind easterlies that, in turn, affect latent heat fluxes at the air-sea interface (Vimont et al. 2003a,b). Physical characteristics of PMM include an anomalous SST gradient across the mean latitude of the ITCZ in the tropical eastern Pacific coupled with anomalous southwesterly winds extending from the equatorial date line to the Baja Peninsula. Anomalies are maximized in boreal spring, with SST anomalies (SSTA) lagging wind anomalies by approximately 1 month because of the slower response of SST to peak midlatitude atmospheric variability in boreal winter (Chiang and Vimont 2004; Chang et al. 2007). Both positive and negative phases of PMM have been documented in observational studies (Chiang and Vimont 2004; Chang et al. 2007); however, the positive phase receives more attention because it often precedes and is hypothesized to trigger El Niño events (Chang et al. 2007; Larson and Kirtman 2013). Positive phase PMM refers to the positive meridional SSTA gradient (from cool to warm moving northward) in the eastern tropical Pacific. Through coupled model simulations, Wu et al. (2010) suggest that such warming in the northeastern Pacific is due to a wind-evaporation-SST feedback originally generated by trade easterly variations induced by extratropical variability, while the cooling in the eastern equatorial Pacific is attributed to enhanced upwelling.


The robust relationship between PMM and ENSO in observations was first reported in Chang et al. (2007), in which the authors find that over 70% of El Niño events occurring between 1958 and 2000 were preceded by positive phase PMM. Zhang et al. (2009a) find a similarly robust relationship (66%) in the National Center for Atmospheric Research (NCAR) Community Climate System Model, version 3 (CCSM3). Larson and Kirtman (2013) show that not only does this relationship hold in a high-resolution, eddy-permitting version of CCSM4¹ (0.1° horizontal resolution) but also the atmospheric variability associated with PMM dominates over all other non-ENSO tropical variability globally in acting as a precursor to ENSO events. The authors also confirm that CCSM4 captures typical PMM characteristics, including SST and horizontal wind coupling and boreal spring phase locking. Therefore, PMM has been consistently found to be a significant mode of variability in the tropical Pacific and an ENSO precursor that is present in both observations and some coupled climate models and thus, we hypothesize, also present in the North American Multimodel Ensemble (NMME) system forecasts.

The NMME system is a multi-institutional multimodel system designed to provide and improve upon intraseasonal to interannual predictions (Kirtman et al. 2014). Further details are provided in the next section. The utility of the NMME system is widely diverse, ranging from real-time operational U.S. drought predictions to climate research, including ENSO diversity (B. P. Kirtman et al. 2014, unpublished manuscript) and the prediction skill of multimodel precipitation forecasts over the southeastern United States (Infanti and Kirtman 2014). In this study, we utilize the suite of SST hindcasts from phase 1 of the NMME prediction experiment to assess the relationship between PMM and ENSO within the NMME climate prediction framework. This paper is organized as follows: First, the NMME prediction experiment is introduced. Second, we examine the March precursors to all moderate to strong El Niño events in observations and identify those with PMM signatures. Third, we assess the NMME forecast skill of said March PMM precursors at varying lead times. Next, we examine the role of PMM as an ENSO precursor and predictor in the NMME system and observations. Last, we test the robustness and sensitivity of the results.

2. The NMME system

The NMME system is a newly formed multi-institutional collaborative effort to provide and improve upon intraseasonal to interannual (ISI) predictions. A full outline of the project and further details can be found in Kirtman et al. (2014). The effort is hinged upon the concept that multimodel ensemble mean forecasts are on average better than even the most skilled individual model alone. Large multimodel ensemble forecasts also allow for better quantification of uncertainty due to model formulation as well as minimize systematic model biases by providing ensemble mean forecasts from a large (in this case, over 100 members) group of ensemble members. The NMME is currently composed of nine institutional partners, all of which provide 9-month forecasts or longer. Phase 1 of NMME data is readily available online (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and includes approximately 30 yr of hindcasts for the period 1982-2010. Fields provided in the phase-1 NMME hindcast database are limited to global SST, 2-m temperature, and precipitation rate. As a result, the analysis methods in this study are carefully chosen to overcome this limitation.

3. ENSO precursors in observations

As previously mentioned, Chang et al. (2007) show that more than 70% of ENSO events occurring between 1958 and 2000 were preceded by PMM events based on a maximum covariance analysis (MCA) between tropical Pacific wind stress and SST observations that are not directly following ENSO events. The authors find that the lag correlation of the temporal variation of the wind stress and SST is maximized (0.7 correlation) when the wind stress leads the SST by 1 month. Although PMM is typically characterized by this anomalous covariability of the atmosphere (wind stress) and ocean (SST), because the two are highly correlated, we use SST variability alone in this study to capture PMM variability.

It should also be noted that in this paper we are emphasizing what is arguably the more deterministic component of PMM; that is, although PMM can have a stochastic component that originates in the extratropics (Vimont et al. 2003a,b), PMM is traditionally defined by a distinct SST anomaly pattern in the tropical Pacific that may prove to be predictable. As such, we focus on this SST pattern, particularly so that we can exhaust its potential persistence and, as follows, its predictability. It is worth noting that not all PMM events exhibit this persistence characteristic and that the predictability of PMM events can be limited by the stochastic atmospheric component. For instance, one way this pattern can emerge is via the seasonal footprinting mechanism developed in Vimont et al. (2001, 2003a,b), which is also shown in Anderson (2003, 2004) and tested in Alexander et al. (2010). Therefore, although we discuss poor PMM forecasts in the context that the models are not appropriately persisting the SST pattern, it is also possible that the predictability of these particular events is ultimately limited by the inherent lack of predictability of stochastically forced coupled responses.

For the verification of NMME hindcasts we use observationally based estimates of SST from the National Oceanic and Atmospheric Administration (NOAA) extended reconstructed SST, version 3b (ERSST.v3b) dataset, which provides monthly global SST at 2° horizontal resolution. ERSST.v3b data are provided by the NOAA/Office of Oceanic and Atmospheric Research (OAR)/Earth System Research Laboratory (ESRL)/Physical Sciences Division (PSD), Boulder, Colorado, from their website (http://www.esrl.noaa.gov/psd/). In this study we consider the period from 1982 to 2010, which is the common period for all but one NMME partner model. The December-February (DJF)-averaged SSTA for all moderate to strong El Niño events throughout this period are shown in the right-hand column in Fig. 1. Included are the 1982/83, 1986/87, and 1997/98 eastern Pacific (EP) El Niño events and the 1990/91, 1994/95, 2002/03, and 2004/05 central Pacific (CP) El Niño events; however, we do not discriminate between CP and EP events in the first part of this study and classify both types as El Niño events. The left-hand column in Fig. 1 shows the ENSO precursor present during the previous boreal spring from observations. For example, in the top of Fig. 1, the 1982/83 El Niño event is shown on the right and the 1982 precursor for said event is shown on the left. The ENSO precursor is defined as the non-ENSO SSTA (see below for definition) spatial distribution present during the March prior to El Niño events. March is chosen because, as previously mentioned, peak PMM SSTA signatures occur in boreal spring. The non-ENSO SSTA is defined as linearly removing the contemporaneous Niño-3.4 signal (SSTA averaged over 5°S-5°N, 170°-120°W) from the full SSTA field. The calculation is as follows:

    non-ENSO SSTA = SSTA - a(Niño-3.4),   (1)

where a is the regression coefficient of the Niño-3.4 index onto SSTA. This residual definition is necessary because the cool (warm) eastern Pacific SSTA associated with La Niña (El Niño) projects onto positive (negative) phase PMM (Chiang and Vimont 2004), which is a common ENSO precursor as previously mentioned [see also Larson and Kirtman (2013) for non-ENSO SSTA]. As shown in the center column of Fig. 1, the Niño-3.4 signal component removed from the SSTA during these particular years is small or near zero, suggesting that the likelihood of this component having a significant impact on the results presented here had it not been removed is very small. Additionally, because the Niño-3.4 component is so weak or negative, the signal of the oncoming El Niño warming is practically undetectable in the full SSTA field (left + center column) at this time, thus further motivating the utilization of precursors. One caveat of this definition is that the nonlinear ENSO component is not removed; how this affects the results presented in this paper will be highlighted and discussed.
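To make the residual definition concrete, the following is a minimal Python sketch of Eq. (1); this is not the authors' code, and the array shapes, variable names, and synthetic data are assumptions for illustration only.

```python
# Minimal sketch of Eq. (1): linearly remove the contemporaneous Niño-3.4 signal
# from the SSTA field at every grid point. ssta has shape (time, lat, lon) and
# nino34 has shape (time,); both are anomalies.
import numpy as np

def non_enso_ssta(ssta, nino34):
    """Return the non-ENSO SSTA residual after removing the Niño-3.4 signal."""
    nino34 = nino34 - nino34.mean()
    # Regression coefficient a of the Niño-3.4 index onto SSTA at each grid point
    a = np.einsum('t,tij->ij', nino34, ssta) / np.dot(nino34, nino34)
    return ssta - a[None, :, :] * nino34[:, None, None]

# Example with synthetic data (29 yr of monthly anomalies on a coarse grid)
rng = np.random.default_rng(0)
ssta = rng.standard_normal((348, 90, 180))
nino34 = rng.standard_normal(348)
residual = non_enso_ssta(ssta, nino34)
```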

Figure 1 shows that the 1982, 1986, 1990, 1994, and 1997 March precursors all display PMM signatures. This verifies that slightly over 70% of observed El Niño events are preceded by PMM events, as shown in Chang et al. (2007). The time window in this study differs from the previous observational study yet the results remain consistent, thus confirming the robustness of the PMM-ENSO relationship in observations. The 2004 non-ENSO SSTA shows no evidence of PMM signatures and the 2002 non-ENSO SSTA shows a weak projection of negative PMM. Although we only discuss PMM precursors to positive phase ENSO in this section, our results, as well as those in Zhang et al. (2009a,b), do suggest that the PMM-ENSO relationship is regime specific, meaning that positive PMM typically precedes El Niño events and negative phase PMM precedes La Niña. This point will be revisited in a later section.

For verification purposes, it is of interest to quantify the phase and magnitude of PMM for all years to obtain a single-value representation of PMM for each year and represent PMM temporal variability in observations. Although an MCA computation between wind stress and SST is ideal for defining PMM (see Chiang and Vimont 2004; Chang et al. 2007), as previously mentioned, we are limited to only SST. As a result, here we define PMM variability strictly from SST as the PMM projection onto the March precursor (non-ENSO SSTA). The PMM projection is defined as the spatial correlation between each March non-ENSO SSTA (e.g., Fig. 1, left) and a previously defined indicator of PMM SSTA signatures found in Larson and Kirtman (2013). The Larson and Kirtman (2013) study shows that the composite of the March SSTA preceding large El Niño events (10 total) in a high-resolution version of CCSM4 shows robust PMM signatures, as reproduced in Fig. 2a. The SSTA pattern is coupled with anomalous westerlies in the central Pacific subtropics consistent with PMM (not shown). Therefore, this PMM pattern is chosen because it is found to be representative of the SSTA signatures associated with coupled (wind stress and SST) PMM variability and because it is derived from a model independent of the NMME suite.² This helps reduce model biases with this particular part of the analysis. The predefined map is also chosen instead of one constructed from observations because it provides an independent "baseline" for PMM amplitude as opposed to handpicking the "best representative" PMM year from observations, which could be considered subjective. As a result, all PMM projections calculated will be less than 1.0 because the predefined map is independent of all datasets analyzed in this study. There is slight sensitivity of the results when using an observed PMM composite map instead of the model-based map chosen here; however, the overall conclusions presented in this paper remain unchanged. A short discussion of the sensitivity of results is provided in section 6.
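As an illustration of this projection metric, here is a minimal Python sketch of a spatial (pattern) correlation between a March non-ENSO SSTA map and a fixed PMM base map. The function and variable names are hypothetical, and area weighting of grid cells is omitted for brevity.

```python
# Minimal sketch: PMM "projection" as the pattern correlation between a single
# March non-ENSO SSTA map and a predefined PMM base map on the same grid,
# with land/missing points stored as NaN.
import numpy as np

def pmm_projection(march_ssta, pmm_base_map):
    """Spatial correlation between one SSTA map and the PMM base map."""
    x = march_ssta.ravel()
    y = pmm_base_map.ravel()
    valid = np.isfinite(x) & np.isfinite(y)          # skip land / missing points
    x = x[valid] - x[valid].mean()
    y = y[valid] - y[valid].mean()
    return np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y))
```

Because the base map is independent of every dataset analyzed, the resulting correlations stay below 1.0, which is the behavior described above.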

The time series of PMM variability based on observations from 1982 to 2010 is shown in Fig. 2b and will be used to quantify NMME PMM forecast skill in the following section. The observed PMM variability is defined as the spatial correlation between the observed non-ENSO SSTA in March and the PMM pattern from Larson and Kirtman (2013). Note that the general variability in Fig. 2b closely resembles Fig. 1 in Chang et al. (2007), which shows the temporal variation of horizontal winds associated with PMM as found via the MCA approach. The characteristic lower-frequency variability associated with PMM is particularly evident from the 1990s to the present. It is evident that the 1982, 1986, 1990, 1994, and 1997 March precursors, all of which showed PMM signatures in Fig. 1, do correspond to the strong positive PMM projections in Fig. 2b and the Chang et al. (2007) approach, indicating that this method for quantifying PMM and PMM variability captures such characteristics.

4. PMM precursors in the NMME

The emphasis of this paper is to assess whether PMM is actually a useful precursor in forecast mode, so in principle we need only have PMM signatures in the initial condition or very short lead forecasts. On the other hand, if PMM is a useful precursor and if it is predictable at longer leads, then there is potential for even longer lead ENSO forecast skill. As such, in this section, 1-, 3-, and 6-month lead time forecasts are considered to assess the NMME's forecast skill for March PMM. It is important to remember that in this study, the term projection is a spatial projection and not a temporal projection. It is, however, inherently a predictive quantity because the NMME PMM projections are calculated from the forecasted SST.

First, consider the NMME forecasts, which are calculated as the equally weighted average of all ensemble members. This is distinctly different from equally weighting each partner model, which in this case would refer to an average of the nine partner models' individual ensemble averages. Here, all 109 individual ensemble members have equal weight so that the plume of forecasts better resolves the probability distribution. Figure 3 shows the NMME PMM forecasts (solid line), the observed PMM (dashed line; same as Fig. 2b), and their difference (bars) for 1-, 3-, and 6-month lead times. Since March PMM is the forecast of interest, a 1-month lead time refers to March monthly averaged forecasts that are initialized in March. Since the 1-month lead March forecast is initialized in early March, we view this short lead forecast as a proxy for the initial condition. The 3-month lead time forecasts are initialized in January and 6-month lead time forecasts are initialized in October. The high forecast skill for the 1-month lead time NMME forecast is expected, with a correlation between the NMME forecast and the observed of 0.96. The NMME performs very well in capturing the low-frequency variability from the 1990s to the present as well as the rapid onset and decay of the 1986 event.
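The distinction between member-weighted and model-weighted averaging can be sketched as follows; the dictionary layout and function names are assumptions for illustration, not the NMME processing code.

```python
# Minimal sketch: equal weight per ensemble member (used in this study) versus
# equal weight per partner model (shown only for contrast).
# forecasts: dict mapping partner-model name -> array of member forecasts,
# each with shape (n_members_for_that_model, ...).
import numpy as np

def nmme_mean_all_members(forecasts):
    """Average over every individual member, so larger ensembles carry more weight."""
    members = np.concatenate(list(forecasts.values()), axis=0)
    return members.mean(axis=0)

def nmme_mean_per_model(forecasts):
    """Average each partner model's ensemble first, then average the model means."""
    model_means = np.stack([m.mean(axis=0) for m in forecasts.values()], axis=0)
    return model_means.mean(axis=0)
```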

Figure 4 shows a breakdown of the forecast skill (correlation between the model forecast and observed) for each lead time, individual partner model ensemble mean, and the NMME mean. All partner models perform well at 1-month lead time (blue circles in Fig. 4), with high correlations ranging from 0.81 [Geophysical Fluid Dynamics Laboratory Climate Model, version 2.2 (GFDL CM2.2)] to 0.96 [Canadian Meteorological Centre (CMC) Third Generation Canadian Coupled Global Climate Model (CanCM3)], and it can be concluded that all models considered in this study capture the PMM state well at very short lead times.

Figure 3b and the red circles in Fig. 4 show that 3-month lead time forecasts also have relatively high correlation (0.71) between the NMME mean forecasts and the observed estimates. Furthermore, all partner models have correlations over 0.5 with the observed, ranging from 0.55 [Climate Forecast System, version 2 (CFSv2)] to 0.73 (CFSv1). In particular, the CCSM3, International Research Institute for Climate and Society (IRI) ECHAM4.5-Anomaly Coupled (IRI-AC), and CFSv2 models are the least skilled at the 3-month lead time. Similar to Fig. 3b, Fig. 5 shows each partner model's ensemble mean 3-month lead time forecast and suggests that this is due to underforecasting the amplitude of events, which at some but not all times can be linked to the discrepancy in the phase forecasted by the individual ensemble members.

As also seen in Fig. 5, the models have difficulty forecasting the amplitude and persistence of the negative PMM event observed from the late 1990s through the early 2000s. Nevertheless, a few models (e.g., CanCM4 and CFSv1) and the NMME perform fairly well during this period. Only the NMME correctly forecasts the amplitude of the rapid-onset 1986 positive PMM event, although all models forecast the correct phase. All models also capture the sharp transition from positive to negative phase PMM between 1997 and 1998. Such a result is not surprising because the 1997/98 El Niño event was very strong, thus producing a large nonlinear positive ENSO component that would not be removed via the linear regression methods presented here in defining the ENSO precursor. Since the forecasts are initialized in January during peak El Niño, the models, as they have a tendency to do in such a circumstance, persist the warm SSTA signal well through boreal spring, resulting in strong projection onto the negative PMM phase in March and PMM projection forecasts that verify closely with the observations.

In addition, all models, including the NMME mean, perform poorly on a few occasions with a 3-month lead, particularly the 1995 event and, to a lesser extent, the 2003 event. In fact, the 1995 forecast is the most poorly forecasted PMM event during the 1982-2010 period for all models. For example, CanCM3, as well as others, forecasts a PMM event for March 1995 but of the incorrect negative phase, despite being one of the better-performing models in this study. This is more easily viewed in Figs. 6d-f, which show the CanCM3 January-March precursor SSTA from the January initialized forecasts. As is evident, this particular model ensemble forecasts a warm-cool meridional SSTA gradient in the tropical eastern Pacific, indicative of weak negative phase PMM, whereas Figs. 6j-l show that the observed precursor is a steadily amplifying positive phase PMM event from January through March.

The poor forecast skill for the 1995 event can likely be explained by model biases as noted by B. P. Kirtman et al. (2014, unpublished manuscript). January 1995 is a strong CP El Niño event; therefore, the 1995 January initialized forecasts have initial states that look very similar to the January 1995 SSTA as shown in Fig. 6g. The warm SSTA perturbation in the central Pacific quickly induces persistent eastern Pacific warming in most of the models and extends well through March (Figs. 6a-c) while, in reality, the warming remains and persists only in the central Pacific (Figs. 6g-i). B. P. Kirtman et al. (2014, unpublished manuscript) show that this type of behavior is typical for the NMME models: specifically, that the models have difficulty forecasting CP El Niño events because they tend to propagate the central Pacific warming to the eastern Pacific fairly quickly, thus resulting in an incorrectly forecasted EP El Niño event. Accordingly, the persistent eastern Pacific warming in the models produces an eastern Pacific SSTA gradient of incorrect sign, thus projecting onto negative phase PMM and resulting in the large discrepancy between the 1995 March PMM model forecasts and the observed positive PMM event shown in Fig. 5. A similar response occurs for the 2003 PMM event but to a lesser extent because the 2003 CP event is weaker.

The forecast skill quickly declines between the 3- and 6-month lead time forecasts. The 6-month NMME PMM forecasts correlate modestly with the observations at 0.47 (Fig. 3c and black circles in Fig. 4) and many of the forecasts are of either incorrect phase or falsely neutral. Some models, particularly CCSM3, GFDL CM2.2, and IRI ECHAM4.5-Direct Coupled (IRI-DC), lose most forecast skill by the 6-month lead while others, including CanCM4 and the National Aeronautics and Space Administration (NASA) Goddard Earth Observing System Model, version 5 (GEOS-5), remain fairly skilled at 6 months with correlations greater than 0.5 with the observations. Notably, the 6-month forecasts are considerably better than the 3-month forecasts for the 1995 PMM event, although such a result is expected because the October initialized forecasts are not initialized with the strong central Pacific SSTA perturbation as discussed above with the 3-month forecasts.

Also shown in Fig. 4 are the NMME persistence forecasts for the various lead times. The persistence forecast assumes that the March PMM projection forecast is the same as the PMM projection during the initialization month, or more simply, the initialized PMM projection persists through March. The persistence forecast skill for all lead times is very similar to the dynamical forecasts. For the 1-month lead time, the skill is the same because the initialization month is also the forecast month. This result is hopeful in that the dynamical forecast skill is at least as good as the persistence forecasts, although ideally the dynamical forecast skill would surpass that of the persistence forecasts.
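A persistence baseline of this kind can be sketched in a few lines; the array names are hypothetical, and the skill measure shown is the simple correlation used elsewhere in this comparison.

```python
# Minimal sketch: the persistence forecast carries the PMM projection of the
# initialization month forward unchanged to March, and its skill is the
# correlation of that carried-forward value with the verified March projection.
import numpy as np

def persistence_skill(pmm_init_month, pmm_obs_march):
    """pmm_init_month, pmm_obs_march: 1D arrays with one value per year (1982-2010)."""
    return np.corrcoef(pmm_init_month, pmm_obs_march)[0, 1]

# For example, for the 3-month lead the predictor would be each year's January
# PMM projection; for the 1-month lead it is March itself, so the skill matches
# the dynamical forecast by construction.
```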

Typical with ensemble forecasts, a few models tend to slightly outperform the ensemble mean in terms of correlation to observations. Such is the case with the CanCM4 and NASA models as shown in Fig. 4, particularly at the 6-month lead (black circles). In addition to correlation with observations, however, we also consider the skill metric root-mean-square error (RMSE). RMSE is a useful skill metric to quantify the "correctness" of a forecast because it is unconstrained by linear fitting, unlike correlation, and also presents the error in the same units as the forecast of interest. Figure 7 shows the RMSE (filled circles) and ensemble spread³ (open triangles) for each partner model and the NMME mean for 1- (blue), 3- (red), and 6-month (black) lead time forecasts. The dashed lines indicate the NMME ensemble mean RMSE and are provided solely as a reference line for easier comparison to the individual models. For all lead times, the NMME forecast sits around the middle in terms of RMSE, which is not surprising because both lesser and higher skilled individual ensemble members contribute equally in the calculation. The skill separation between the three different lead times for some models, particularly CCSM3 and GFDL CM2.2, is fairly evenly spaced, indicating that skill is consistently gained (lost) by shortening (extending) the forecast lead time. On the other hand, most models, including the NMME ensemble mean, lose more skill between the 1- and 3-month forecasts than between the 3- and 6-month forecasts. In particular, IRI-DC has similar skill for both the 3- and 6-month forecasts, which was not evident in the correlations seen in Fig. 4; therefore, little skill is gained until shortening the lead time to 1 month.
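For concreteness, the two metrics can be computed as in the following Python sketch, using the spread definition from footnote 3 (root-mean-square difference over all pairs of ensemble members); the array layout and names are assumptions.

```python
# Minimal sketch: RMSE of the ensemble-mean forecast against observations, and
# ensemble spread as the RMS difference over all member pairs (footnote 3).
import numpy as np
from itertools import combinations

def rmse(forecast, observed):
    """forecast, observed: 1D arrays of March PMM projections, one value per year."""
    return np.sqrt(np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2))

def ensemble_spread(members):
    """members: array of shape (n_members, n_years) at a given lead time."""
    diffs = [members[i] - members[j]
             for i, j in combinations(range(len(members)), 2)]
    return np.sqrt(np.mean(np.square(diffs)))
```

If the system is well calibrated, these two numbers should be close; a spread much smaller than the RMSE indicates an overconfident ensemble, which is the behavior discussed next.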

Although a few of the partner models are more skilled than the NMME ensemble mean in terms of RMSE, one important factor to consider is how well the forecast system is calibrated. In this regard, the ensemble spread (open triangles in Fig. 7) is a useful forecast metric in that, if the system is well calibrated, the RMSE and the spread compare closely. Having a well-calibrated system is beneficial because then the spread can be used to estimate the RMSE; however, if the spread is considerably smaller than the RMSE (e.g., the CCSM3 and GFDL CM2.2 1-month lead time forecasts), the spread significantly underestimates the uncertainty in the forecast. Kirtman et al. (2013) show that the NMME system is particularly beneficial for this reason in that the NMME ensemble spread captures the uncertainty in ENSO forecasts fairly well. Similar results are seen with the PMM forecasts. For the most part, the partner models considered here are not well calibrated in this particular field and tend to be overconfident in their forecasts; however, the full NMME system does appear to be fairly well calibrated at 1- and 3-month lead times.

Overall, from Figs. 4 and 7 we conclude that the 1- and 3-month forecasts are reasonable PMM predictions for March but that the skill of the 6-month forecasts is insufficient for further discussion in this paper. Since the full NMME ensemble is the best-calibrated system for PMM considered here and therefore is arguably the best suited for prediction purposes, individual partner models will no longer be discussed separately but instead all individuals are considered ensemble members. There will, however, be a distinction between EP and CP El Niño events and the associated precursors in the following section.

5. PMM as a precursor to ENSO in the NMME

To investigate the PMM-ENSO relationship in the NMME system, longer forecast periods are necessary to 1) assess the PMM forecast in March and 2) assess the ENSO forecast the following December. Therefore, only partner models that provide 12-month forecasts are considered in this section so that we can utilize the 1- (March initialized) and 3-month (January initialized) lead time PMM forecasts as well as the subsequent December ENSO forecasts. Four partner models satisfy this criterion-namely, CCSM3, GFDL CM2.2, CanCM3, and CanCM4-and the ensemble size is reduced accordingly to 36 ensemble members. While including all models is preferable, this subset, which includes two of the higher skilled models and two of the lesser skilled models, is a fair representation of the average forecast skill of the full ensemble.

EP and CP El Niño events are analyzed separately to allow for comparison between the PMM precursor relationship and both types of events in observations and model forecasts. La Niña events-namely, the 1988/89, 1999/2000, 2000/01, and 2007/08 events-are also included to examine the previously suggested regime-specific relationship between PMM and EP ENSO events. To differentiate between EP and CP events, indices are computed. The amplitude of EP events is quantified by the Niño-3 index (SSTA averaged over 5°S-5°N, 150°-90°W), where negative values identify La Niña events and positive values identify El Niño events. CP events are defined by an alternative index proposed in Lopez and Kirtman (2013). The idea is that the El Niño Modoki index (EMI; see Ashok et al. 2007), which is often used as an index for CP El Niño events, is derived from empirical orthogonal function (EOF) analysis of observed SSTA and that similar EOFs calculated from model SSTA may differ considerably in structure. Considering that SST forecasts from four different models are analyzed in this section, accounting for variations in CP structure is important in attempting to capture this particular SST mode specific to each model. In short, the CP index is based on partial regression followed by EOF analysis. Prior to EOF analysis, all SSTA variability correlated to the Niño-3 index but not the EMI is removed, because anomalies associated with EP and CP events often overlap and this method prevents events from being categorized as both. Then the EOF analysis is performed and the first principal component represents the CP index (CP-PC1). EP and CP ENSO forecasts for the month of December are considered because this allows for the longest-term forecast for boreal winter SST common to both initialization times. As before, PMM is quantified by the PMM projection onto the March precursor SSTA. Such relationships between PMM and ENSO events are shown in Fig. 8.
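The partial-regression-plus-EOF construction can be illustrated with the sketch below. This is one plausible reading of the procedure rather than the Lopez and Kirtman (2013) code: the part of the Niño-3 index that is linearly independent of the EMI is assumed to be the signal removed from the SSTA before the EOF, and all variable and function names are hypothetical.

```python
# Minimal sketch (interpretation, not the published algorithm): remove the
# Niño-3 variability that is uncorrelated with the EMI, then take the leading
# principal component of the residual SSTA as the CP index (CP-PC1).
import numpy as np

def cp_index(ssta, nino3, emi):
    """ssta: (time, space) anomalies; nino3, emi: (time,) indices."""
    # Part of Niño-3 not explained by the EMI
    emi_c = emi - emi.mean()
    nino3_resid = nino3 - (np.dot(nino3, emi_c) / np.dot(emi_c, emi_c)) * emi_c
    nino3_resid -= nino3_resid.mean()
    # Partial regression: remove that residual Niño-3 signal at every grid point
    beta = nino3_resid @ ssta / np.dot(nino3_resid, nino3_resid)
    ssta_cp = ssta - np.outer(nino3_resid, beta)
    # EOF via SVD of the (time, space) anomaly matrix; PC1 is the CP index
    ssta_cp = ssta_cp - ssta_cp.mean(axis=0)
    u, s, vt = np.linalg.svd(ssta_cp, full_matrices=False)
    return u[:, 0] * s[0]
```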

Figures 8a,c show the relationship between PMM and EP events for the January and March initialized forecasts. Red crosses show the model forecasts for the PMM precursor and the 1982, 1986, and 1997 December Niño-3, which correspond to the observed 1982/83, 1986/87, and 1997/98 EP El Niño events highlighted in the previous sections. Blue crosses are similar but for the 1988/89, 1999/2000, 2000/01, and 2007/08 La Niña events, and black crosses are the forecasts for all other years. Therefore, all December Niño-3 and March PMM forecasts from all 36 members from 1982 to 2010 are shown. Bold squares indicate the mean forecasts for each respective category and black filled circles show the NMME forecasts for the observed La Niña and El Niño years. Black open circles indicate the observed Niño-3 and PMM for the observed La Niña and El Niño years.

A regime-specific precursor relationship between PMM and EP ENSO is evident in both the March initialized forecasts (1-month lead for PMM and 10-month lead for Niño-3) and January initialized forecasts (3-month lead for PMM and 12-month lead for Niño-3). The relationship is less pronounced in the January initialized forecasts because of increased forecast uncertainty that can be attributed to increased spread associated with the longer lead time forecasts. The observed EP El Niño years tend to have forecasts in the upper right quadrant, meaning that the models typically forecast both positive PMM and EP El Niño events for years in which strong El Niño events are observed (red square). Analogously, observed La Niña years, on average, have forecasts in the bottom left quadrant, meaning that the models tend to forecast both negative PMM and La Niña events for years in which La Niña events are observed (blue square). All other years (black square) show no clear relationship with PMM. The NMME mean forecasts (black filled circles) capture this relationship fairly well, particularly in the March initialized forecasts for EP El Niño years; however, the ensemble mean Niño-3 forecasts underestimate the intensity of the high-amplitude 1982/83 and 1997/98 events. For the observed La Niña years, it appears that only two La Niña events considered here are preceded with negative PMM in observations and the ensemble mean appears to forecast these events well. On the other hand, PMM and ENSO forecasts for the other La Niña years are less skilled. Overall, PMM appears to be a potentially useful tool to enhance confidence in EP ENSO forecasts because of the regime-specific precursor relationship found between PMM and Niño-3.

A less robust relationship is seen between positive PMM and CP El Niño years (Figs. 8b,d). Although there appears to be a slight relationship between PMM and CP years, the separation between CP years and non-CP years is fairly small compared to the pronounced relationship seen with the EP years. This is possibly, in part, due to the previously mentioned model caveat discussed by B. P. Kirtman et al. (2014, unpublished manuscript) in which the models tend to favor EP events over CP events; however, such an unclear relationship is seen in observations as well (open circles), considering that two of the four observed CP years show weak PMM projection during the previous March. The NMME forecasts for the four CP years demonstrate that PMM may not be a useful precursor for CP El Niño forecasts, especially considering that there also appears to be no clear relationship between PMM and CP phase in observations.

It should be pointed out that these results rely on the approach that we look for precursors prior to the verified events of interest. In this regard, PMM precursors are identified with a "hindsight approach" in that we look for PMM based on the prior knowledge that an ENSO event did, in fact, occur. In this sense, the PMM precursor can be considered a dependent variable. This is distinctly different from the "forecast mode approach" presented in the next section in that PMM is considered an independent predictor and the reliability of the predictor is assessed. In this sense, a reliable precursor need not and should not also be assumed a reliable predictor.

6. PMM as an ENSO predictor

To quantify the robustness of using PMM to predict ENSO events within the NMME system and observations, we calculate a "percent correct" metric. Essentially, the percent correct metric quantifies how well the forecasted or observed PMM predicts the forecasted or observed ENSO index. For example, for all ensemble members that forecast positive sign PMM in March, we calculate the percentage of those ensemble members that also forecast positive Niño-3 in December and separately the percent of those members that correctly predict the observed Niño-3 sign. The procedure is repeated for negative phases, CP events, and observations. Model results are presented as the values outside of parentheses in Table 1, and results from observations for the NMME hindcast period are in Table 2 with the identifier "NMME." The Niño-3.4 predictions shown in Tables 1 and 2 as well as the results in Table 2 for the extended observed record length are discussed in section 7. The "no skill" mark for these values is 50%, assuming that PMM and ENSO are independent of each other. For instance, given a particular PMM forecast, random chance dictates that the probability of the ENSO forecast being positive or negative is 50% and 50%, respectively. Note that we are assuming that positive (negative) phase PMM predicts positive (negative) phase ENSO.

To remove both weak and neutral PMM projections and ENSO events, the analysis is repeated for events that fall within the upper (for El Niño events) or lower (for La Niña events) terciles only; these correspond to the parenthesized values in Tables 1 and 2. Terciles are defined by ranking the data and then partitioning the data into thirds. The percent correct is calculated as the percentage of PMM projections that fall within the upper (lower) tercile that correctly predict an ENSO event that also falls within the upper (lower) tercile. Therefore, we are considering the skill of strong PMM projections predicting strong ENSO events. The no skill mark for these values is 33% given that random chance dictates a 33% probability that the forecasted ENSO event will fall within the respective tercile, given an independent predictor, in this case PMM. Therefore, percent correct values greater (lesser) than 33% indicate that skill is acquired (lost) when utilizing the PMM predictor compared with random chance. These values can be considered event predictions while the values from the method in the previous paragraph can be considered sign predictions.
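The two versions of the metric can be written out as in the Python sketch below; the array names are hypothetical, and the no-skill baselines of 50% (sign) and 33% (tercile event) follow from assuming PMM and ENSO are independent.

```python
# Minimal sketch: "percent correct" for sign predictions (positive/negative PMM
# followed by same-signed December ENSO index) and for tercile-based event
# predictions (strong PMM followed by strong ENSO of the same sense).
import numpy as np

def percent_correct_sign(pmm, enso, positive=True):
    """pmm, enso: 1D arrays of March PMM projections and December ENSO index."""
    sel = pmm > 0 if positive else pmm < 0
    hits = enso[sel] > 0 if positive else enso[sel] < 0
    return 100.0 * hits.mean()            # no-skill reference: 50%

def percent_correct_event(pmm, enso, upper=True):
    """Upper-tercile PMM predicting upper-tercile ENSO (or lower/lower)."""
    q = 2 / 3 if upper else 1 / 3
    pmm_thr, enso_thr = np.quantile(pmm, q), np.quantile(enso, q)
    sel = pmm >= pmm_thr if upper else pmm <= pmm_thr
    hits = enso[sel] >= enso_thr if upper else enso[sel] <= enso_thr
    return 100.0 * hits.mean()            # no-skill reference: 33%
```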

The robust precursor relationship between PMM and EP ENSO in the model forecasts is not necessarily grounds for use as a forecast tool unless all events in which PMM is in the upper or lower tercile are considered and the forecast skill is significantly greater than the 33% no-skill threshold. In observations (Table 2) for the NMME hindcast period, although positive sign PMM does not predict positive sign Niño-3 with any skill (47%), positive PMM events (i.e., upper tercile PMM projection) correctly predict EP El Niño events (i.e., upper tercile Niño-3 index) 60% of the time. January and March initialized forecasts of positive PMM events verify similarly with observed EP El Niño: 52% and 56%, respectively (Table 1). On the other hand, less skill is seen with the model forecasts in that the forecasted PMM events correctly predict the forecasted EP El Niño events slightly less than 50% of the time for both sets of initialized forecasts. Therefore, the model forecasts do not capture the full predictive potential of PMM for EP El Niño events that is seen in observations, and so there is room to improve this relationship in the forecast models to better utilize the predictive potential of PMM for EP El Niño events.

In observations (Table 2), although negative sign PMM predicts negative sign Niño-3 well (71%), negative PMM events predict La Niña events with less skill (25%) than random chance alone and negative PMM event forecasts predict observed La Niña events with practically no skill (Table 1; 34% for both initialization times). Considering the model ENSO forecasts, however, negative PMM event forecasts correctly predict La Niña event forecasts with skill comparable to that of El Niño (48% and 52% for January and March initialized forecasts, respectively). Unfortunately, this shows that the weak relationship between PMM events and La Niña events found in observations is not captured well by the model forecasts, and too often the models forecast La Niña events following negative PMM events. Therefore, there is also opportunity to improve this representation in the models despite the fact that negative PMM events are not a useful predictor of La Niña events based on the analysis methods presented here.

Observed positive sign PMM predicts observed positive sign CP index well (73%); however, observed positive PMM events show only slight skill in predicting observed CP El Niño events (40%) and this measure is reproduced fairly well by the model forecasts. It follows that in a hindsight or precursor view, where we first identify ENSO years and then look for the precursors, PMM appears to correctly predict particular types of ENSO events quite well. In forecast mode, however, using PMM as a predictor for EP El Niño events shows some promise whereas, for La Niña events and CP El Niño events, little to no skill is gained.

It should be noted that the percent correct for observed positive PMM events predicting observed positive ENSO events (both EP and CP type) is somewhat sensitive to the PMM base map chosen in the analysis. The sign predictions remain consistent. For instance, when using the model-based PMM map, observed positive PMM events predict EP El Niño events 60% of the time; however, when using the observationally based map, the percent correct is reduced to 50%. For CP events, the model-based PMM map yields a 40% value whereas the observationally based map yields a 50% value. The percent correct for observed negative PMM predicting observed La Niña is changed by no more than 2% for both sign and event predictions. Results regarding forecasted PMM predicting the forecasted and observed ENSO fall within a 5% window compared to using the model-based PMM map. Therefore, the overall conclusions and assessment of how well the model PMM forecasts predict forecasted and observed ENSO events remain unchanged. This discrepancy for the observed warm events is likely a result of the small sample size. In the following section we attempt to strengthen the results by testing their robustness.

7. Extending the observational dataset

Two methods are employed to increase the robustness and test the sensitivity of the observed predictions: 1) calculations are repeated for PMM predicting the Niño-3.4 SST anomaly index and 2) the observed percent correct calculations are repeated with an extended record from 1950 to 2012. First, Niño-3.4 is added to Tables 1 and 2 in an attempt to capture both EP and CP events into one index, thus allowing for a single predictand for the one predictor, PMM. The number of PMM events considered remains the same but the predictand is less discriminatory, thus potentially allowing for more ENSO events to meet the particular thresholds. Only the event predictions are discussed below.

For the model forecasts, results for positive PMM events predicting positive Niño-3.4 events are fairly similar to the Niño-3 results. This is unsurprising considering that the models tend to favor EP over CP El Niño events and, in the observations, higher-amplitude positive ENSO events tend to be of the EP type. Both factors likely bias the upper tercile Niño-3.4 values toward EP events. For La Niña events, the percent correct is larger by nearly 10% for both March and January initialized forecasts when considering negative PMM events predicting negative observed Niño-3.4 events compared to Niño-3. Therefore, results are sensitive to the SST anomaly index considered when defining negative ENSO.

For the observed predictions in Table 2, positive PMM events predict EP El Niño events 60% of the time, whereas it is only 40% for CP events and 50% for Niño-3.4. In this case, a less discriminatory predictand does not necessarily increase the skill of the predictor, possibly because the number of PMM events remains the same. For La Niña events, the skill increases moderately from 25% to 38% when using Niño-3.4 instead of Niño-3; however, the skill is only slightly above the no-skill threshold.

Second, the observed record is extended beyond the NMME hindcast period to 1950-2012 and the observed metrics are recalculated. Results are in Table 2 with the identifier "extended." Extending the record length has little effect on negative PMM predicting negative ENSO, and event predictions using both the Niño-3 and Niño-3.4 predictands show little skill above random chance alone. The predictability for EP El Niño events is reduced from 60% to 48% when extending the record, whereas CP events show an increase from 40% to 57%. The reduction in EP El Niño skill is consistent with findings from Wang et al. (2013) in which the authors find that during more recent decades, the PMM precursor relationship with El Niño is stronger. Therefore, the NMME hindcast years encompass decades where PMM is arguably expected to be a better EP El Niño predictor compared to prior decades. As such, when extending the record to 1950, the skill is reduced.

Results for Niño-3.4 events, both positive and negative phase, are consistent when extending the record length, suggesting that considering nondiscriminatory ENSO events provides less sensitive results. Comparing the 48% for positive Niño-3.4 events from the extended observed dataset with the model forecasts for Niño-3.4, there is a fair amount of consistency between the two. In contrast, the models do not accurately capture the weak PMM-ENSO relationship for La Niña events, as was previously discussed. Overall, however, for positive ENSO events, our analyses suggest that, even when extending the observational record and not discriminating between CP and EP events, PMM has some skill as an ENSO predictor in observations.

8. Discussion

The motivational factor behind this work is that several ENSO precursors are routinely discussed in the literature and many are used to make statistical ENSO forecasts. In theory, all precursors should be in the dynamical prediction systems; however, the utility of precursors in dynamical forecasts is ultimately limited by how well the models represent these precursors and the associated physical interactions. If a particular precursor is important, it is of practical importance to examine how well the current state-of-the-art forecast systems capture it and to work to improve this representation. The ultimate goal is to identify the precursors that are more useful as predictors of ENSO events and utilize them to enhance confidence in forecasts. Currently, most precursor research takes a more hindsight approach, in that an ENSO event is first observed in nature or identified in a model simulation and then precursors are identified. Instead, in a prediction or forecast mode approach as shown here, the precursor is viewed as an independent predictor and the reliability of the precursor is assessed.

The present study shows that PMM variability is captured well by the NMME system at both 1- and 3-month lead times and that PMM often is a precursor to ENSO events in the models. Perhaps most interestingly, we find that observed positive PMM events show promise as a predictor of observed EP El Niño events but less skill as a predictor of CP El Niño events, whereas observed negative PMM events show no skill at predicting La Niña events. In addition, utilizing a less discriminatory ENSO index like Niño-3.4 produces similar results that are also less sensitive to the time period examined. Nevertheless, the observed relationships are not necessarily reproduced well by the models. We should also stress that the results presented in this paper are quite limited by the observational dataset and that similar analyses should be repeated as more observational data as well as NMME forecasts become available.

One way to possibly enhance PMM's skill as an ENSO predictor is to consider only "preconditioned" PMM events, considering that several studies show that the effectiveness of PMM-like precursors may depend on whether the equatorial Pacific is also primed with anomalous heat content buildup (Anderson 2007; Deser et al. 2012; Larson and Kirtman 2013). In terms of the models themselves, there is a notion that climate models are not noisy enough. Subgrid kinetic energy is lost throughout the model integration, but this energy can be added via nonlinear subgrid dynamical schemes that project the energy onto resolved scales. To better represent subgrid stochastic processes, Berner et al. (2008) apply this method in the European Centre for Medium-Range Weather Forecasts (ECMWF) coupled model and find marked improvements in tropical Pacific seasonal forecasts. As such, we speculate that the models are underestimating the noisiness in the climate system and that increasing the noisiness in the models could improve the models' representation of PMM.

This work exemplifies the common misconception that a reliable precursor is also a reliable predictor, in that PMM does not predict La Niña with any skill. Therefore, the most important conclusion of this paper is that, even though a precursor (in this case, PMM) is shown to be reliable using the hindsight or precursor approach, it is not necessarily also useful as a predictor in forecast mode. Nevertheless, we find the results concerning EP El Niño events optimistic in that, with better representation of the observed PMM-ENSO relationship in the models, PMM could serve as a confidence-enhancing tool for this type of ENSO forecast.

Acknowledgments. We wish to thank Bruce Anderson and two anonymous reviewers for providing insightful comments and suggestions that greatly improved the quality of the manuscript. Funding from NSF (AGS1137911) and from NOAA (NA10OAR4310203 and NA12OAR4310089) supported this research. The authors also acknowledge computational support from the University of Miami Center for Computational Science and the National Center for Atmospheric Research.

¹ The version of CCSM used in Larson and Kirtman (2013) is a precursor release of the official version of CCSM4.

² It should be noted that CCSM3 is part of the NMME forecasts, but the model (CCSM4) used for the base PMM pattern in Larson and Kirtman (2013) has substantially different parameterized physics and resolution and very different ENSO behavior.

³ Ensemble spread is defined as the root-mean-square difference between all possible combinations of ensemble forecasts at a given lead time.

REFERENCES

Alexander, M. A., D. J. Vimont, P. Chang, and J. D. Scott, 2010: The impact of extratropical atmospheric variability on ENSO: Testing the seasonal footprinting mechanism using coupled model experiments. J. Climate, 23, 2885-2901, doi:10.1175/2010JCLI3205.1.

Anderson, B. T., 2003: Tropical Pacific sea-surface temperatures and preceding sea-level pressure anomalies in the subtropical North Pacific. J. Geophys. Res., 108, 4732, doi:10.1029/2003JD003805.

_____, 2004: Investigation of a large-scale mode of ocean-atmosphere variability and its relation to tropical Pacific sea surface temperature anomalies. J. Climate, 17, 4089-4098, doi:10.1175/1520-0442(2004)017<4089:IOALMO>2.0.CO;2.

_____, 2007: On the joint role of subtropical atmospheric variability and equatorial subsurface heat content anomalies in initiating the onset of ENSO events. J. Climate, 20, 1593-1599, doi:10.1175/JCLI4075.1.

Ashok, K., S. Behera, A. S. Rao, H. Y. Weng, and T. Yamagata, 2007: El Niño Modoki and its possible teleconnection. J. Geophys. Res., 112, C11007, doi:10.1029/2006JC003798.

Berner, J., F. J. Doblas-Reyes, T. N. Palmer, G. Shutts, and A. Weisheimer, 2008: Impact of a quasi-stochastic cellular automaton backscatter scheme on the systematic error and seasonal prediction skill of a global climate model. Philos. Trans. Roy. Soc., 366A, 2559-2577, doi:10.1098/rsta.2008.0033.

Chang, P., L. Zhang, R. Saravanan, D. J. Vimont, J. C. H. Chiang, L. Ji, H. Seidel, and M. K. Tippett, 2007: Pacific meridional mode and El Niño-Southern Oscillation. Geophys. Res. Lett., 34, L16608, doi:10.1029/2007GL030302.

Chiang, J. C. H., and D. J. Vimont, 2004: Analogous Pacific and Atlantic meridional modes of tropical atmosphere-ocean variability. J. Climate, 17, 4143-4158, doi:10.1175/JCLI4953.1.

Deser, C., and Coauthors, 2012: ENSO and Pacific decadal variability in the Community Climate System Model version 4. J. Climate, 25, 2622-2651, doi:10.1175/JCLI-D-11-00301.1.

Infanti, J. M., and B. P. Kirtman, 2014: Southeastern U.S. rainfall prediction in the North American Multi-Model Ensemble. J. Hydrometeor., 15, 529-550, doi:10.1175/JHM-D-13-072.1.

Kirtman, B. P., and Coauthors, 2014: The North American Multimodel Ensemble: Phase-1 seasonal to interannual prediction; Phase-2 toward developing intraseasonal prediction. Bull. Amer. Meteor. Soc., 95, 585-601, doi:10.1175/BAMS-D-12-00050.1.

Larson, S., and B. Kirtman, 2013: The Pacific meridional mode as a trigger for ENSO in a high-resolution coupled model. Geophys. Res. Lett., 40, 3189-3194, doi:10.1002/grl.50571.

Lopez, H., and B. P. Kirtman, 2013: Westerly wind bursts and the diversity of ENSO in CCSM3 and CCSM4. Geophys. Res. Lett., 40, 4722-4727, doi:10.1002/grl.50913.

Vimont, D. J., D. S. Battisti, and A. C. Hirst, 2001: Footprinting: A seasonal link between the mid-latitudes and tropics. Geophys. Res. Lett., 28, 3923-3926, doi:10.1029/2001GL013435.

_____, _____, and _____, 2003a: The seasonal footprinting mechanism in the CSIRO general circulation models. J. Climate, 16, 2653-2667, doi:10.1175/1520-0442(2003)016<2653:TSFMIT>2.0.CO;2.

_____, J. M. Wallace, and D. S. Battisti, 2003b: The seasonal footprinting mechanism in the Pacific: Implications for ENSO. J. Climate, 16, 2668-2675, doi:10.1175/1520-0442(2003)016<2668:TSFMIT>2.0.CO;2.

Wang, S.-Y., M. L'Heureux, and J.-H. Yoon, 2013: Are greenhouse gases changing ENSO precursors in the western North Pacific? J. Climate, 26, 6309-6322, doi:10.1175/JCLI-D-12-00360.1.

Wu, S., L. Wu, Q. Liu, and S.-P. Xie, 2010: Development processes of the tropical Pacific meridional mode. Adv. Atmos. Sci., 27, 95-99, doi:10.1007/s00376-009-8067-x.

Zhang, L., P. Chang, and L. Ji, 2009a: Linking the Pacific meridional mode to ENSO: Coupled model analysis. J. Climate, 22, 3488-3505, doi:10.1175/2008JCLI2473.1.

_____, _____, and M. K. Tippett, 2009b: Linking the Pacific meridional mode to ENSO: Utilization of a noise filter. J. Climate, 22, 905-922, doi:10.1175/2008JCLI2474.1.

SARAH M. LARSON AND BEN P. KIRTMAN
Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, Florida

(Manuscript received 13 January 2014, in final form 13 June 2014)

Corresponding author address: Sarah M. Larson, RSMAS/MPO, 4600 Rickenbacker Causeway, Miami, FL 33149.

E-mail: [email protected]

(c) 2014 American Meteorological Society
