
A Fast Inverse Algorithm Based on the Multigrid Technique for Cloud Tomography [Journal of Atmospheric and Oceanic Technology]
[July 24, 2014]

ABSTRACT

A fast inverse algorithm based on the half-V cycle scheme (HV) of the multigrid technique is developed for cloud tomography. Fourier analysis shows that the slow convergence problem caused by the smoothing property of the iterative algorithm can be effectively alleviated in HV by performing iterations from the coarsest to the finest grid. In this way, the resolvable scales of information contained in observations can be retrieved efficiently on the coarser grid level and the unresolvable scales are left as errors on the finer grid level. Numerical simulations indicate that, compared with the previous algorithm without HV (NHV), HV can significantly reduce the runtime by 89%-96.9% while retaining a similar level of retrieval accuracy. For the currently feasible two-level flight scheme over a 20-km-wide target area, convergence can be accelerated from 407 s in NHV to 13 s in HV. This reduction in time would be multiplied several times if the target area were much wider, in which case segmental retrieval would be required to avoid exceeding the time limit of cloud tomography. This improvement represents an important saving in computing resources and enables the real-time application of cloud tomography in a much wider range of fields.



1. Introduction

The spatial distribution of liquid water content (LWC) in clouds plays a very important role in weather forecasting, climate modeling, and weather modification, among other areas. Taking the last of these, weather modification, as an example: this technique has been widely operated in China but is still in urgent need of scientific guidance and confirmation (Young 1996). The Bergeron-Findeisen mechanism is the theoretical basis for weather modification, including precipitation enhancement and hail suppression. According to this theory, the recognition and quantitative measurement of cloud LWC are essential to determine the seeding area and the concentration of seeding substances, such as dry ice or silver iodide.

Compared with in situ measurements and active remote sensing, cloud tomography has distinctive advantages in detecting the spatial distribution of LWC (Warner and Drake 1985; Huang et al. 2008). Recent research (Zhou et al. 2013, hereafter Z2013) showed that the conventional single-level detection scheme first proposed by Warner and Drake (1985) leads to insufficient observations in the vertical direction. This lack of vertical information seriously affects the retrieval accuracy. A new multilevel detection scheme was proposed by Z2013 to solve this problem, and it proved to be very effective in improving the retrieval accuracy. The detection scheme is accomplished by using several aircraft to fly through the target area simultaneously at different altitudes. Referred to as the multiaircraft flight (MAF) scheme, it forms the basis of the study reported in the present paper.


However, the retrieval algorithm of cloud tomography is much more sophisticated and time consuming than those of in situ measurements and active remote sensing. Zhou et al. (2011) pointed out that a rise in total LWC makes the nonlinearity of the objective function stronger. Linearization of the objective function would introduce model errors that slow down the convergence rate and can even make the iteration diverge. Although the direct application of a nonlinear optimization algorithm can speed up the convergence rate to some extent, the runtime of the retrieval is still too long for real-time application. For example, in a 20-km-wide target area with about 250 variables observed by a two-level flight scheme, the retrieval algorithm developed by Z2013 may cost more than 6 min. This time cost is multiplied several times if the target area is much wider, and segmental retrieval is required to avoid exceeding the time limit of cloud tomography.

Long runtimes of the inverse algorithm not only consume computational resources but also limit the real-time application of cloud tomography in many fields, such as short-term forecasting and weather modification. For example, in weather modification, observations and processing for cloud tomography must be carried out over very short periods, during which the LWC distribution in the observed cloud can be taken as roughly invariant. Otherwise, using cloud tomography results to guide the seeding operation will be meaningless. So, what is the permissible range of time? Cloud tomography field data to use as a reference to help answer this question are quite limited. Nevertheless, one example of an experiment to gather such data was performed over the Gulf of Mexico in 1985 (Warner and Drake 1988). The observed target was cumulus cloud and the detection was completed within 3-4 min. The statistical comparison between the tomography retrievals and in situ measurements showed excellent agreement. This result indicated that the LWC in cumulus cloud does not change much over a 3-4-min period. Although our observation and seeding targets are generally stratiform cloud systems, in which the LWC changes more slowly than in cumulus clouds, we still do not want the decision and implementation of seeding to take more than twice that time. For example, using the currently feasible MAF two-level scheme, the observation can be completed in 5 min if the width of the target area is 20 km and the cruising speed is 70 m s⁻¹, which is typical for a Yun-12 aircraft. However, the inversion of LWC in the Z2013 scheme may cost more than 6 min. In addition, time is also needed for the seeding aircraft to locate the correct place and implement the seeding. Considering all the above-mentioned factors, the hope is that the time taken for inversion can be reduced as much as possible. Obviously, the inverse algorithm used in Z2013 must be improved to meet such needs.

One of the factors that slows down convergence of the inverse algorithm is the so-called smoothing property of the iterative method it uses. This reflects the fact that the iterative method is very effective at eliminating the oscillatory components of error within the first 10-20 iterations while leaving the smooth components relatively unchanged. The multigrid technique was initially developed to overcome such a dilemma in differential equations by allowing smooth modes to converge faster than oscillatory ones (Briggs et al. 2000). It was first applied in three-dimensional variational data assimilation (3DVAR) to diagnose the water vapor field observed by a GPS network (MacDonald et al. 2002). Because of its numerical efficiency and simplicity for parallelization, the multigrid technique was selected to implement the space-time multiscale analysis system (STMAS), first developed by Xie et al. (2005). This multigrid-version STMAS has been applied widely in the data assimilation field, and shows distinct advantages over traditional objective analysis and standard statistical analysis. When applying the multigrid technique in 3DVAR or 4DVAR, speeding up the convergence of the iterative algorithm is not the only advantage; the modeling of the background error covariance can also be simplified, which is a very difficult and challenging problem in atmospheric data assimilation (Li et al. 2008, 2010; Xie et al. 2011).

Warner and Drake (1985) used a strategy similar to a multigrid scheme for cloud tomography. They started with four grids, and doubled the number of grids after each iteration. However, their research did not focus on the multigrid technique, and thus it left many associated problems unaddressed. First, the original nonlinear inverse problem was turned into a series of linear problems before the application of the multigrid technique. As mentioned before, such linearization makes the algorithm take much longer to converge, or even fail to converge if the LWC is relatively high. In fact, the multigrid technique is powerful enough to solve the nonlinear problem directly. Therefore, in this paper, it is applied directly to the nonlinear optimization problem. Second, the multigrid technique was applied without careful consideration of the limitations of cloud tomography, such as the insufficient observations in the vertical direction. In fact, there is a variety of forms of multigrid schemes (Briggs et al. 2000). In this paper, we examine the mathematical nature of the cloud tomography problem and develop an appropriate multigrid technique for it. Third, their research did not investigate the effectiveness of the multigrid scheme or reveal the essential cause of its accelerated convergence. This aspect is discussed in detail in the present work.

In this study, a multigrid method is applied to cloud tomography to improve the convergence rate and thereby ensure its real-time application in a much wider range of fields. The remainder of the paper is organized as follows. In section 2 we describe the development of the multigrid scheme, specifically designed to be appropriate for cloud tomography. In section 3 we investigate the effectiveness of the newly developed algorithm via numerical simulations. Finally, conclusions and a discussion of further improvements are presented in section 4.

2. Specific multigrid scheme for cloud tomography

a. Brief review of the inverse algorithm in Z2013

The multigrid scheme for cloud tomography is based on the inverse algorithm in Z2013. Therefore, before discussing the new scheme, we provide a brief review of the previous one. Tikhonov regularization is used to build the objective function

J(x) = ||F(x) - b||² + α||Lx||²,    (1)

The first term is the residual term, where x is the LWC vector, F(x) is the radiative transfer operator, and b is the observation data vector. The second term is the regular term, where α is the regularization parameter and L is the 2D Laplace operator, which acts as a smoothing constraint on the LWC. The choice of α is critical, as its value determines the relative influence of the observations and the smoothing constraint on the retrieval result. Overestimating its value will smooth out detailed information, while underestimating it will make the retrieval result oscillatory. The L-curve method is used to determine the value of α under the following considerations. First, it is impossible to identify a universal empirical value of α via numerical simulations, because the regular term is the smoothing constraint, whose reliability depends on the cloud LWC distribution: if the LWC distribution is smooth, the weight of the regular term should be large; otherwise, it should be small. Second, the large variation range of LWC distributions in nature makes it impossible to determine the value of α in advance. Besides, a posteriori strategies, such as the Morozov deviation principle, also cannot work, since the error level (including the atmospheric variability, sounding error, and microwave radiometer noise) is difficult to estimate accurately. Numerical simulations show that an inaccurate estimate of the error level will lead to large errors in the retrieval. In the L-curve method, a log-log plot of the residual term against the regular term for all possible values of α forms an L-shaped line. The sharp corner of this line represents the optimum α for the inverse problem (Wang 2007). To achieve this, objective functions with several initial guesses of α have to be optimized sequentially. The limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm for bound-constrained problems (L-BFGS-B) is used to solve this large nonlinear problem (Zhu et al. 1997).
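To make this review concrete, the following is a minimal sketch of such a regularized, bound-constrained inversion using SciPy's L-BFGS-B. The forward operator forward_F, the observation vector b, and the simple Laplacian builder are illustrative assumptions, not the actual radiative transfer code of Z2013; a 1D state vector is used for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def laplacian_1d(n):
    # Simple discrete Laplacian used as the smoothing constraint L
    # (illustrative; the paper uses a 2D, and later a 1D, Laplace operator).
    return (np.diag(-2.0 * np.ones(n))
            + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1))

def objective(x, forward_F, b, L, alpha):
    # Tikhonov objective J(x) = ||F(x) - b||^2 + alpha * ||L x||^2
    r = forward_F(x) - b
    s = L @ x
    return float(r @ r + alpha * (s @ s))

def retrieve(forward_F, b, n, alpha, x0=None, maxiter=20):
    # Bound-constrained minimization (LWC >= 0) with L-BFGS-B,
    # the solver used for the large nonlinear problem in Z2013.
    L = laplacian_1d(n)
    x0 = np.zeros(n) if x0 is None else x0
    res = minimize(objective, x0, args=(forward_F, b, L, alpha),
                   method="L-BFGS-B", bounds=[(0.0, None)] * n,
                   options={"maxiter": maxiter})
    return res.x
```

In the L-curve procedure, retrieve would be called once per candidate α, and the resulting (log residual term, log regular term) pairs scanned for the sharp corner.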

b. Establishment of the specific multigrid scheme

As previously discussed, the L-curve method can be very time consuming, since objective functions with all possible initial guesses of α have to be solved. In the multigrid technique, this method is still required, for the same reason as that discussed in section 2a. However, the situation becomes even worse, because the L-curve procedure has to be repeated every time the grid spacing changes. To reduce the runtime of the inversion, the number of changes of grid spacing should be minimized. Therefore, the half-V cycle scheme of the multigrid technique is adopted in this study. In this scheme, iteration begins on the coarsest grid, and the retrieved results are then projected onto the next finer grid as an initial value. This procedure is repeated until the finest grid level is reached (see the sketch below). The principle of this scheme can be summarized briefly as follows. The smoothing property makes the iterative algorithm highly effective at eliminating the oscillatory components of the error during the early iterations; however, once the oscillatory modes have been removed, the iteration is much less effective at reducing the remaining smooth components. The half-V cycle scheme manages to always keep the iterations acting on oscillatory modes, with high efficiency, by performing the iterations from the coarsest to the finest grid. In fact, whether a signal with a certain wavenumber is smooth or oscillatory depends on the grid resolution. Modes with a wavenumber less than half the number of grid points are smooth modes; otherwise, they are oscillatory modes. Thus, the smooth error modes on a finer grid appear more oscillatory on a coarser grid, and according to the smoothing property their elimination is much more effective on the coarser grid than on the original fine grid. Hence, only 10-20 iterations are needed to eliminate the oscillatory error modes on each grid level, since the smooth part has already been eliminated on the coarser grid levels. Besides, iteration on a coarse grid is less expensive because there are fewer unknowns to be retrieved. That is why performing iterations from the coarsest to the finest grid can greatly speed up the convergence rate (Briggs et al. 2000). The effectiveness of this scheme in cloud tomography is investigated by Fourier analysis in section 3b. The half-V cycle scheme has been applied in the STMAS introduced in section 1, and it has proved to be very effective. Hereafter, we refer to the half-V cycle scheme as HV and to the previous algorithm in Z2013, which works without HV, as NHV.
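The control flow of the half-V cycle can be sketched in a few lines. Here solve_on_grid stands for the per-level L-BFGS-B solve (about 20 iterations per level, as chosen below) and prolong for the coarse-to-fine projection; both names are placeholders for illustration, not the paper's code.

```python
def half_v_cycle(levels, solve_on_grid, prolong):
    # levels: grid descriptions ordered from coarsest to finest.
    # Optimize on the coarsest grid, project the result onto the next
    # finer grid as the initial guess, and repeat up to the finest level.
    x = solve_on_grid(levels[0], x0=None, n_iter=20)   # coarsest solve
    for coarse, fine in zip(levels, levels[1:]):
        x0 = prolong(x, coarse, fine)    # coarse result seeds the fine solve
        x = solve_on_grid(fine, x0=x0, n_iter=20)
    return x
```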

It should be noted that the L-curve method compares the residual and regular terms without consideration of scale dependency. Earlier research on the nonlinear multigrid technique showed that, as the algorithm advances, the solution on a coarse grid begins to achieve accuracy comparable to that of the solution on the finest grid, albeit at the coarse resolution (Briggs et al. 2000). In addition, the observation vector is unchanged throughout all the grid levels in HV, so the iterations on each level will bring the solution close to the simulated truth on the final grid level. From this point of view, we consider it acceptable to determine the value of the regularization parameter by comparing the residual and regular terms. The necessity and validity of the L-curve method in HV are checked via numerical simulations in section 3a.

The HV algorithm designed specifically for cloud tomography is shown in Fig. 1, and its detailed parameters are determined as follows. Six grid levels are set up over a 2D vertical slice 20 km wide and 4 km high, as is also the case in Z2013. The ratio for intergrid transfer is set to 0.5 [i.e., the grid spacing on the ith grid level is half that on grid level (i − 1)]. This ratio is used almost universally, because there is usually no advantage to using grid spacings with ratios other than 0.5 (Briggs et al. 2000). Although the MAF scheme is applied to increase the information in the vertical direction, the observations in this direction are still insufficient. Accordingly, it is improper to set a high vertical resolution. We therefore decided to stop subdividing the vertical grids after the fourth grid level. This means that the numbers of vertical and horizontal grids on the six levels are 1 × 1, 2 × 2, 4 × 4, 8 × 8, 8 × 16, and 8 × 32, respectively, over the same slice. For a similar reason, the smoothing constraint is only needed in the vertical direction; in HV, a 1D Laplace operator is used instead of the 2D operator of Z2013, which is discussed via the numerical simulation results in section 3b. The optimum value of α for the objective function built on every grid level has to be determined by the L-curve method. The possible guesses for α are set as 10 different values from 1.0 × 10⁶ to 1.0 × 10⁻³, with a ratio of 0.1. As mentioned earlier, the oscillatory error modes on each grid level are mainly eliminated during the first 10-20 iterations. However, there is still a risk that smooth components of error may remain once the scheme reaches the finest grid (Briggs et al. 2000). To prevent this, the number of iterations on each grid level should not be too small, although a larger number costs more time. The iteration number has taken various values in other work, depending on the problem, such as 12 in He et al. (2008) and 50 in Li et al. (2010). Here, for cloud tomography, the number is set to 20 based on the result of a sensitivity experiment (not shown). The operator that projects the coarse-grid vectors onto fine-grid vectors is denoted by I_{2s}^{s}, where s is the grid spacing. Here, injection is chosen as I_{2s}^{s}, as demonstrated in Fig. 2.
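As an illustration of the intergrid transfer, the sketch below implements one plausible reading of the injection operator demonstrated in Fig. 2: each coarse-grid value is copied onto the fine-grid cells it covers. The exact stencil of Fig. 2 is not reproduced in this reprint, so this piecewise-constant form should be treated as an assumption.

```python
import numpy as np

def inject_coarse_to_fine(xc, ratio_v=2, ratio_h=2):
    # Copy each coarse-grid value onto the fine-grid cells it covers.
    # ratio_v drops to 1 above grid level 4, where vertical subdivision stops.
    return np.repeat(np.repeat(xc, ratio_v, axis=0), ratio_h, axis=1)

# Example: project a level-5 (8 x 16) field onto the level-6 (8 x 32) grid.
x5 = np.zeros((8, 16))
x6 = inject_coarse_to_fine(x5, ratio_v=1, ratio_h=2)
assert x6.shape == (8, 32)
```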

3. Numerical simulations

In this section, the effectiveness of the HV algorithm for cloud tomography is investigated via numerical simulations. Since the observing system is the same as that in Z2013, we give only a brief introduction here. The central frequency of the double-antenna microwave radiometer is 31.65 GHz, and the elevation angles of the two antennas are 30° and 90°. The beamwidth of the radiometer is 4.2°, the integration time is 0.3 s, the antenna switching time is 0.2 s, and the standard deviation of the measurement error is 0.2 K. The surrounding environment is simulated based on the radiosonde profile obtained at Changchun station in China at 0000 UTC 8 July 2010. Atmospheric variability is simulated by drawing temperature and humidity from a Gaussian distribution with means equal to the Changchun sounding profile and standard deviations equal to those proposed by Drake and Warner (1988). For the inversion calculations, temperature and humidity are assumed to be horizontally homogeneous and equal to the Changchun sounding profile values. Sounding errors are ±5% for the water vapor mixing ratio and ±1.0 K for temperature (Huang et al. 2008). The simulated liquid water fields include weak onion cloud, broken cumulus cloud, stratocumulus cloud, and horizontally homogeneous cloud. The onion cloud, as defined by Warner and Drake (1985), consists of a central core of relatively high LWC surrounded by uniform rings of decreasing LWC toward the cloud edge. These cases are similar to those studied in Z2013, except that the final grid level is set to 8 × 32 rather than 10 × 20: since the grid ratio is 0.5, the numbers of vertical and horizontal grids must be powers of 2. The initial guess for the retrieval is always set to zero in this study. As mentioned in section 2b, iterations on each grid level will bring the solution close to the simulated truth on the final grid level. Therefore, when estimating the retrieval error of a solution on a certain grid level, we first project the solution onto the final grid level and then compare it with the simulated truth. The retrieval error is defined as the root-mean-square error (RMSE) divided by the maximum LWC in the simulated true field.
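The error metric is simple enough to state in code; this sketch assumes both fields have already been projected onto the finest (8 × 32) grid.

```python
import numpy as np

def retrieval_error(x_retrieved, x_truth):
    # RMSE divided by the maximum LWC of the simulated true field,
    # with both fields on the finest grid level.
    rmse = np.sqrt(np.mean((x_retrieved - x_truth) ** 2))
    return rmse / np.max(x_truth)
```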

a. The necessity and validity of the L-curve method in the HV algorithm

In the HV algorithm, the optimum value of α for each grid level is determined by the L-curve method, which can be very time consuming. Therefore, we assess the necessity and validity of this strategy and examine whether the procedure can be simplified in certain circumstances. The sensitivity study described in this and the following sections is carried out with broken cumulus cloud and the two-level flight scheme, since this cloud type contains more scales of information than the other three cloud types (not shown), and the two-level flight scheme is regarded as the most feasible option in field experiments at present (Z2013).

Figures 3a and 3b show that, on grid levels 1 and 2, the retrieval error first decreases quickly as α decreases and then remains constant for α < 1.0 × 10⁴. There seems to be no better choice of α than 0. The situation is different on grid levels 3-5 (Figs. 3c-e): the retrieval error first decreases and then increases as α descends, indicating an optimum α on these grid levels. The solid dots in these figures represent the retrieval under the constraint with the optimum α determined by the L-curve method. These solid dots essentially coincide with the open circles corresponding to the minimum retrieval error. This demonstrates that the L-curve method is necessary and valid for finding the optimum value of α on these grid levels. The situation on the last grid level looks like that on the first two grid levels, but in fact the retrieval accuracy under the smoothing constraint with the α determined by the L-curve method is improved by 0.1% compared with the retrieval accuracy without any constraint. This improvement can be larger when the number of flight levels decreases. Therefore, the regular term and the L-curve method are still needed on this finest grid level. The above-mentioned phenomenon also appears in the retrievals for the other cloud types and other numbers of flight levels. Therefore, we can conclude that if the number of grids in the vertical direction is less than the number of flight levels on a particular grid level, the observations are adequate to retrieve the wavelengths represented by the grids perfectly, without any smoothing constraint. In this case, the regular term in the objective function can be omitted and the L-curve method is no longer needed, saving a lot of time. Here, only observations in the vertical direction are considered, because the information in the horizontal direction is always sufficiently provided.

Figure 3 also shows that the optimum value of α decreases from the coarsest grid level to the finest. This is because in HV the iterations are performed from the coarsest grid to the finest grid, which forces the retrieval to proceed from long waves to short waves. Long waves are smooth, which is more consistent with the property of the regular term, and thus the weight of this term is relatively high. Conversely, short waves are oscillatory, which is not consistent with the constraint term, and thus the weight decreases. It should be noted that in the HV algorithm an optimum α is determined for each grid level (i.e., each range of wavelengths), while in the NHV algorithm only one optimum α is set for all wavelengths. Obviously, the former treatment is more reasonable in numerical calculations.

To summarize, the regular term is only needed when the number of vertical grids exceeds the number of flight levels. When this happens, the L-curve method can find the optimum α effectively. The optimum value of α decreases from the coarsest to the finest grid level, which indirectly proves that the convergence in HV proceeds from long waves to short waves.
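This rule reduces to a one-line guard in an implementation; n_vertical_grids and n_flight_levels are hypothetical names for the per-level vertical grid count and the number of aircraft flight levels.

```python
def needs_regular_term(n_vertical_grids, n_flight_levels):
    # The smoothing constraint (and hence the per-level L-curve search)
    # is only required when the vertical grids outnumber the flight levels.
    return n_vertical_grids > n_flight_levels
```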

b. The convergence modes of the NHV and HV algorithms

In this part, we compare the convergence modes of HV and NHV and disclose the nature of the multigrid technique that makes the retrieval much more efficient. For simplicity of comparison, the L-curve method is omitted and the optimum α is assumed to be known in advance. Figure 4 shows the retrieval error as a function of iteration number for NHV and HV. The retrieval errors of the first 105 iterations are plotted, since this is the total number of iterations for HV. The error of the initial guess is also plotted at iteration 0. As can be seen from the figure, the retrieval error of NHV decreases very quickly in the first 10 iterations, but after that the reduction slows down markedly. For HV, meanwhile, the retrieval error decreases more slowly than for NHV at first, but continuously; after 86 iterations, the retrieval error of HV becomes smaller than that of NHV. These different patterns of error reduction indicate totally different convergence modes in the two algorithms.

To examine this problem, the 2D discrete Fourier transform (DFT) is applied to the retrieval errors during the iterations of both algorithms. The retrieval error is calculated by subtracting the simulated truth from the retrieved value projected onto the final grid level. The DFT is computed by the fast Fourier transform (FFT) algorithm. Frequency is expressed as a fraction of the sampling rate, which means the frequency values always run between 0 and 0.5, since discrete data can only contain frequencies between direct current (DC) and one-half the sampling rate (Smith 1997, 141-242). To compare the convergence rates fairly, the intermediate solutions of the two algorithms at the same numbers of iterations are analyzed (Fig. 6). In detail, the numbers of iterations of HV on grid levels 1-6 are 7, 18, 20, 20, 20, and 20, respectively, so the intermediate solutions of NHV are taken at the 7th, 25th, 45th, 65th, 85th, and 105th iterations. The Fourier analysis of the error of the initial guess is also shown in Fig. 5 for comparison.
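A minimal sketch of this diagnostic, assuming NumPy and fields already projected onto the finest grid; the non-negative frequencies returned by np.fft.fftfreq correspond to the 0-0.5 fraction-of-sampling-rate convention used here.

```python
import numpy as np

def error_spectrum(x_solution, x_truth):
    # 2D DFT amplitude spectrum of the retrieval error, computed by FFT.
    err = x_solution - x_truth
    amp = np.abs(np.fft.fft2(err))
    f_vert = np.fft.fftfreq(err.shape[0])   # fractions of the sampling rate
    f_horiz = np.fft.fftfreq(err.shape[1])
    # Keep the non-negative frequencies, DC up to half the sampling rate
    # (fftfreq reports the Nyquist bin as -0.5; it is dropped in this sketch).
    keep_v, keep_h = f_vert >= 0, f_horiz >= 0
    return f_vert[keep_v], f_horiz[keep_h], amp[np.ix_(keep_v, keep_h)]
```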

Let us first focus on the right-hand column of Fig. 6. The Fourier analysis of NHV shows that both long-wave and short-wave errors are eliminated quickly within the first seven iterations (Figs. 5 and 6g). The short-wave error is then eliminated ahead of the long-wave error from the 7th to the 45th iteration (Figs. 6g-i). Thereafter, the errors with frequencies of 0.25 and 0.125 in the vertical direction are barely improved by further iteration (Figs. 6j-l). This exhibits a very typical smoothing property of the iterative algorithm. Let us next turn to the left-hand column of Fig. 6. The Fourier analysis of the HV algorithm shows that the error is eliminated gradually from long waves to short waves, which is quite different from the convergence mode of NHV. If we carefully compare the solutions in Figs. 6f and 6l, we find that the remaining explicit frequency components of the error are 0.25 and 0.125 in NHV, but only 0.25 in HV, after the same number of iterations. In fact, the NHV algorithm needs to run another 481 iterations to eliminate the long-wave error as well as HV does in only 105 iterations. This comparison indicates that long waves can be retrieved effectively on a coarser grid. The smoothing property problem of the iterative algorithm is considerably alleviated by HV.

The final solution of HV and the intermediate solution of NHV at the same number of iterations (Figs. 6f and 6l) show that the error in the horizontal direction is mostly eliminated, while the error in the vertical direction still exists explicitly. This indicates that the observations in the vertical direction are insufficient to reconstruct the vertical structure of the LWC on the grids we set, which is why the smoothing constraint is applied only in the vertical direction. It appears that more flight levels, more efficient physical constraints, and additional information from other observing instruments are still required to improve the inversion in the vertical direction.

Figure 6 only shows the iterative process of HV and NHV when α is taken at its optimum value. In this situation, the regular term can help to filter out the unresolvable scales of information and thus promote the convergence. If α is taken at less than the optimum value, which happens in the L-curve method, where several possible values of α have to be tested, then NHV will waste a lot of time dwelling on some small scales that cannot be resolved by observations. For example, the number of iterations increases from 586 to 6017 when α decreases from the optimum value of 10 to 0.001. The situation is much better in HV, since only 20 iterations are needed to retrieve the scales on each grid level. Therefore, HV does not waste much time on the unresolvable scales; it just retrieves the resolvable scales of information on the corresponding grid level and leaves the unresolvable scales as errors on the fine-grid level.

In conclusion, Fourier analysis reveals the completely different convergence modes of these two algorithms. The slow convergence problem in NHV caused by the smoothing property can be effectively alleviated in HV by performing iterations from the coarsest to the finest grid. In this way, the resolvable scales of information contained in observations can be retrieved efficiently on the coarser grid level and the unresolvable scales are left as errors on the finer grid level.

c. Comparison of the retrieval efficiency of the HV and NHV algorithms

In this section, the NHV and HV algorithms are applied to the cases comprising all four cloud types, with the number of flight levels ranging from 1 to 6, as in Z2013. The retrieval accuracies and total runtimes of the two algorithms are compared. All computations are performed on an IBM ThinkCentre with a 3.4-GHz CPU, 4-GB memory, and the Windows 7 operating system.

As shown in Fig. 7a, the retrieval error of HV is quite close to that of NHV. The minor differences range from −0.3% to 0.6% and are distributed randomly among all the cases. However, the runtime of the HV algorithm drops considerably compared with NHV (Fig. 7b). The average times for the NHV and HV algorithms to complete the inversion over all the cases are 319 and 18 s, respectively. By using the newly developed algorithm, the runtime can be reduced by 89%-96.9%. For the two-level flight scheme, which is the currently feasible multilevel flight scheme, the inversion can be completed within 13 s by HV, much less than the 407 s taken by NHV.

The variation in runtime among cloud types and numbers of flight levels is quite different between the two algorithms. The runtime of HV increases monotonically as the number of flight levels increases, and is quite similar among the different cloud types. This is because the number of iterations on every grid level is fixed. As the number of flight levels increases, the complexity of the objective function increases accordingly, leading to a longer runtime for each iteration. On the other hand, an increase in the number of flight levels also leads to the omission of the regular term on some grid levels, resulting in a runtime saving. The overall result, however, is a monotonic increase in total runtime. Compared with HV, the variation in runtime for NHV is quite irregular and oscillatory, since the number of iterations is entirely determined by the specific situation in every case.

This experiment indicates that the HV algorithm can significantly reduce the runtime of cloud tomography while keeping a retrieval accuracy similar to that of the NHV algorithm. This improvement makes it possible for cloud tomography to serve in some real-time applications in the near future, such as short-term forecasting and weather modification.

4. Summary and conclusions

In this study, a fast inverse algorithm (HV) based on the half-V cycle scheme of the multigrid technique has been developed for cloud tomography, so as to ensure its real-time application in a wider range of fields in the near future. In HV, the objective function built on the coarsest grid is optimized first, and the solution is then projected onto the finer grid as an initial value. This procedure is repeated until the finest grid level is reached. The effectiveness of the HV algorithm and the reasons for its accelerated convergence have been investigated via numerical simulations.

Fourier analysis shows that the slow convergence problem caused by the smoothing property of the algorithm previously developed in Z2013 (NHV) can be effectively alleviated in HV by performing iterations from the coarsest to the finest grid. In this way, the resolvable long scales of information contained in the observations can be retrieved efficiently on the coarser grid level, and the unresolvable short scales are left as errors on the finer grid level. The comparison of the two algorithms shows that the retrieval accuracy of HV is quite close to that of NHV in all cases, while the runtime is significantly reduced, by 89%-96.9%. For the currently feasible two-level flight scheme over a 20-km-wide target area, convergence can be accelerated from 407 s in NHV to 13 s in HV. This reduction in time would be multiplied several times if the target area were much wider, in which case segmental retrieval would be required to avoid exceeding the time limit of cloud tomography.

Some possible improvements are suggested as follows. First, more vertical information is still needed to retrieve the short waves in that direction. This information can be obtained from additional flight levels, more efficient physical constraints, and/or other observing instruments. Second, the L-curve method could be omitted if observations from radar replaced the Laplace operator as a much more efficient constraint, because the observation error covariance matrix of radar can be statistically estimated in advance; the runtime could thus be reduced greatly. Third, if the time-consuming process of determining the regularization parameter can be omitted on the basis of the above assumption, then cloud tomography will be free to choose any multigrid scheme, and more numerical simulations should be performed to find a better scheme for cloud tomography. Last, the multigrid technique is well suited to parallel implementation, which would make cloud tomography an even more efficient tool.

Acknowledgments. The authors thank Professor Yuanfu Xie from NOAA for the instructive and helpful discussions on multigrid theory. The authors also appreciate the comments and suggestions from the anonymous reviewers, which greatly contributed to improving the original manuscript. This research was supported by the National Natural Science Foundation of China (Grant 41105016), the 973 Program of China (Grant 2013CB430105), the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant KZCX2-EW-203), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant XDA05100300), and the Central Public-Interest Scientific Institution Basal Research Foundation of China (Grant IUMKY201313PP0403).

REFERENCES

Briggs, W. L., V. E. Henson, and S. F. McCormick, 2000: A Multigrid Tutorial. 2nd ed. Society for Industrial and Applied Mathematics, 193 pp.

Drake, J. F., and J. Warner, 1988: A theoretical study of the accuracy of tomographic retrieval of cloud liquid with an airborne radiometer. J. Atmos. Oceanic Technol., 5, 844-857, doi:10.1175/1520-0426(1988)005<0844:ATSOTA>2.0.CO;2.

He, Z. J., Y. F. Xie, W. Li, D. Li, G. J. Han, K. X. Liu, and J. R. Ma, 2008: Application of the sequential three-dimensional variational method to assimilating SST in a global ocean model. J. Atmos. Oceanic Technol., 25, 1018-1033, doi:10.1175/2007JTECHO540.1.

Huang, D., Y. Liu, and W. Wiscombe, 2008: Determination of cloud liquid water distribution using 3D cloud tomography. J. Geophys. Res., 113, D13201, doi:10.1029/2007JD009133.

Li, W., Y. F. Xie, Z. J. He, G. J. Han, K. X. Liu, J. R. Ma, and D. Li, 2008: Application of the multigrid data assimilation method to the China Seas' temperature forecast. J. Atmos. Oceanic Technol., 25, 2106-2116, doi:10.1175/2008JTECHO510.1.

_____, _____, S. M. Deng, and Q. Wang, 2010: Application of the multigrid method to the two-dimensional Doppler radar radial velocity data assimilation. J. Atmos. Oceanic Technol., 27, 319-332, doi:10.1175/2009JTECHA1271.1.

MacDonald, A. E., Y. F. Xie, and R. H. Ware, 2002: Diagnosis of three-dimensional water vapor using a GPS network. Mon. Wea. Rev., 130, 386-397, doi:10.1175/1520-0493(2002)130<0386:DOTDWV>2.0.CO;2.

Smith, S. W., 1997: The Scientist and Engineer's Guide to Digital Signal Processing. California Technical Publishing, 626 pp.

Wang, Y. F., 2007: Computational Methods for Inverse Problems and Their Applications (in Chinese). Higher Education Press, 83-84.

Warner, J., and J. F. Drake, 1985: Determination of cloud liquid water distribution by radiometric data. J. Atmos. Oceanic Technol., 2, 293-303, doi:10.1175/1520-0426(1985)002<0293:DOCLWD>2.0.CO;2.

_____, and _____, 1988: Field tests of an airborne remote sensing technique for measuring the distribution of liquid water in convective cloud. J. Atmos. Oceanic Technol., 5, 833-843, doi:10.1175/1520-0426(1988)005<0833:FTOAAR>2.0.CO;2.

Xie, Y. F., S. E. Koch, J. A. McGinley, S. Albers, and N. Wang, 2005: A sequential variational analysis approach for mesoscale data assimilation. 21st Conf. on Weather Analysis and Forecasting/17th Conf. on Numerical Weather Prediction, Washington, D.C., Amer. Meteor. Soc., 15B.7. [Available online at https://ams.confex.com/ams/WAFNWP34BC/techprogram/paper_93468.htm.]

_____, _____, _____, _____, P. E. Bieringer, M. Wolfson, and M. Chan, 2011: A space-time multiscale analysis system: A sequential variational analysis approach. Mon. Wea. Rev., 139, 1224-1240, doi:10.1175/2010MWR3338.1.

Young, K. C., 1996: Weather modification-A theoretician's viewpoint. Bull. Amer. Meteor. Soc., 77, 2701-2710, doi:10.1175/1520-0477(1996)077<2701:WMATV>2.0.CO;2.

Zhou, J., H. C. Lei, and L. Ji, 2011: Comparison of non-linear optimum and linear iteration algorithms applied in the microwave tomograph (in Chinese). Plateau Meteor., 30, 760-771.

_____, _____, and _____, 2013: Improvement of liquid water content retrieval accuracy by multilevel detection in cloud tomography. J. Atmos. Oceanic Technol., 30, 301-312, doi:10.1175/JTECH-D-12-00054.1.

Zhu, C. Y., R. H. Byrd, P. Lu, and J. Nocedal, 1997: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans. Math. Software, 23, 550-560, doi:10.1145/279232.279236.

JUN ZHOU and HENGCHI LEI, Key Laboratory of Cloud-Precipitation Physics and Severe Storms, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing, China

LEI JI, Key Laboratory of Cloud-Precipitation Physics and Severe Storms, Institute of Atmospheric Physics, Chinese Academy of Sciences; Institute of Urban Meteorology, China Meteorological Administration; and Beijing Weather Modification Office, Beijing, China

TUANJIE HOU, Key Laboratory of Cloud-Precipitation Physics and Severe Storms, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing, China

(Manuscript received 5 September 2013, in final form 2 April 2014)

Corresponding author address: Dr. Jun Zhou, Key Laboratory of Cloud-Precipitation Physics and Severe Storms, Institute of Atmospheric Physics, Chinese Academy of Sciences, Deshengmenwai, Qijiahuozi, Chaoyang District, Beijing 100029, China.

E-mail: [email protected]

DOI: 10.1175/JTECH-D-13-00184.1

(c) 2014 American Meteorological Society
