
Rigorous multi-slice wave optical simulation of x-ray propagation in inhomogeneous space

Open Access

Abstract

The simulation of the propagation of divergent beams using Fourier-based angular spectrum techniques can pose challenges for ensuring correct sampling in the spatial and reciprocal domains. This challenge can be compounded by the presence of diffracting objects, as is often the case. Here, I give details of a method for robustly simulating the propagation of beams with divergent wavefronts in a coordinate system where the wavefronts become planar. I also show how diffracting objects can be simulated, while guaranteeing that correct sampling is maintained. These two advances allow for numerically efficient and accurate simulations of divergent beams propagating through diffracting structures using the multi-slice approximation. The sampling requirements and numerical implementation are discussed in detail, and I have made the computer code freely available.

© 2019 Optical Society of America

1. INTRODUCTION

The so-called multi-slice (MS) approximation has been widely used to perform wave optical simulations of beams propagating through diffracting structures too large for the projection approximation to be valid (e.g., [1–6]) and is thus well established. The MS approximation instead divides the diffracting structure into a partition of slices to which the projection approximation may be applied individually. Algorithms employing the MS approximation generally use the theory of Fourier optics [7] to propagate beams across a slice between planes. Numerical implementations of such algorithms use the discrete Fourier transform (DFT), which makes such algorithms computationally tractable. Careful attention must be paid to sampling in DFT-based beam propagation techniques, which can become prohibitive when modeling divergent beams, since correct sampling must be maintained in both the spatial and reciprocal spaces throughout the entire simulated volume. This challenge can be exacerbated when modeling diffracting objects that may not be properly represented on a sampled grid. Despite the large number of published studies that employ the MS approximation, I am unaware of any that have analyzed, or proposed a solution to, the challenge posed by sampling divergent beams in a MS simulation containing diffracting objects. A method of transforming a divergent-beam geometry into a plane-wave geometry for single-slice simulation has been demonstrated [6,8,9]. In this paper, I provide a full description of how to integrate this transformation into a MS simulation in order to calculate how a coherent divergent beam propagates through a diffracting sample extended in the axial direction. Furthermore, I provide a thorough analysis of the sampling requirements of this new approach and show how it can be implemented in such a way that guarantees that correct sampling is maintained throughout the simulation.

I begin this paper with an overview of preliminary theory required to develop the simulation technique, including angular spectrum approaches to propagating beams within a system described by the paraxial wave equation. The sampling requirements of this technique, in both the spatial and reciprocal spaces, are reviewed before describing how diffracting objects are treated. I then introduce the divergent-wave-to-plane-wave transformation, which allows for significant relaxation of the sampling requirements. It is then shown how this transformation can be applied in the MS approximation, and I provide details of its numerical implementation and sampling requirements. I conclude the paper with a series of examples that provide verification of the new simulation method and illustrate its usefulness.

A. Preliminary Theory

I consider a system as depicted in Fig. 1, where a monochromatic point source is located at the origin of the global coordinate system. Extended sources that are spatially incoherent can be represented by collections of point sources, and polychromatic sources may be modeled by incoherently superimposing simulations performed at wavelengths throughout the source spectrum. Although beyond the scope of this paper, sources with more general states of coherence may be modeled by performing one simulation for each mode of the source’s coherent mode decomposition. I assume that free space exists for 0 ≤ z ≤ Δzso, where I use the subscript so to denote source-to-object distance. I also assume that some degree of refractive index inhomogeneity exists in the region z ≥ Δzso and that our ultimate goal is to calculate the field in the plane z = Δzso + Δzod, where Δzod is the object-to-detector distance. I thus refer to the space defined by Δzso ≤ z ≤ Δzso + Δzod as the computational region. I describe fields propagating in this system according to the form u(x,y,z)exp(ikz − iωt), where k is the wavenumber, ω is the angular frequency of the radiation emitted by the source, and it is assumed that u(x,y,z) varies only weakly with z. If u(x,y,z)exp(ikz − iωt) is substituted into the time-harmonic scalar wave equation, it is not difficult to show that the paraxial wave equation is obtained as

$$\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}+2ik\frac{\partial u}{\partial z}=0.\tag{1}$$
The angular spectrum, U(a,b,z), associated with a wavefield, u(x,y,z), can be used to propagate such a field over some axial distance in a homogeneous space. It has been shown that u(x,y,z) and U(a,b,z) are related according to [7]
$$U(a,b,z)=\iint u(x,y,z)\exp\!\left(-i2\pi\!\left(x\frac{a}{\lambda}+y\frac{b}{\lambda}\right)\right)\mathrm{d}x\,\mathrm{d}y,\tag{2}$$
$$u(x,y,z)=\iint U(a,b,z)\exp\!\left(i2\pi\!\left(x\frac{a}{\lambda}+y\frac{b}{\lambda}\right)\right)\mathrm{d}\!\left(\frac{a}{\lambda}\right)\mathrm{d}\!\left(\frac{b}{\lambda}\right).\tag{3}$$
It is then relatively straightforward to show that a component of an angular spectrum may be propagated a distance Δz in homogeneous space according to [7] U(a,b,z+Δz)=P(a,b,Δz)U(a,b,z), where
$$P(a,b,\Delta z)=\exp\!\left(-ik(a^2+b^2)\Delta z/2\right)\tag{4}$$
is the well-known free-space propagation operator.
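To make the preceding relations concrete, the following minimal NumPy sketch implements the angular-spectrum propagation step of Eqs. (2)–(4) for a field sampled on a square grid. It is an illustrative sketch only, not the released MATLAB code of [12]; the sign convention follows the exp(ikz − iωt) form assumed above, and the grid parameters in the usage example are assumed values chosen purely for illustration.

```python
import numpy as np

def propagate_angular_spectrum(u, dx, wl, dz):
    """Propagate a sampled paraxial field u(x, y) a distance dz in free space.

    Implements U(a, b, z + dz) = P(a, b, dz) U(a, b, z), with P given by Eq. (4)
    and the direction cosines a = wl * fx, b = wl * fy.
    """
    N = u.shape[0]
    k = 2 * np.pi / wl
    f = np.fft.fftfreq(N, d=dx)                 # spatial frequencies fx = a / wl
    fx, fy = np.meshgrid(f, f, indexing="ij")
    a2b2 = (wl * fx) ** 2 + (wl * fy) ** 2      # a^2 + b^2
    P = np.exp(-1j * k * a2b2 * dz / 2)         # free-space propagator, Eq. (4)
    return np.fft.ifft2(P * np.fft.fft2(u))

if __name__ == "__main__":
    # Illustrative (assumed) values: ~20 keV photons, 1024 x 1024 grid, 100 nm pixels
    wl, N, dx = 6.2e-11, 1024, 100e-9
    x1d = (np.arange(N) - N // 2) * dx
    X, Y = np.meshgrid(x1d, x1d, indexing="ij")
    u0 = np.exp(-(X ** 2 + Y ** 2) / (2 * (5e-6) ** 2))   # Gaussian test field
    u1 = propagate_angular_spectrum(u0, dx, wl, dz=0.01)
```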

Fig. 1. Schematic diagram of the system studied in this paper. A point source is located at the origin of the Cartesian coordinate system. Refractive index inhomogeneities may exist for z > Δzso, one example of which is illustrated in a layer bounded by the planes z = zi and z = zi+1 = zi + Δzi.

The spherical wave emitted by the point source has complex amplitude in the free-space region of exp(ikr)/r, where r = √(x²+y²+z²). If I make the Fresnel approximation to this field and drop the exp(ikz) dependence, I obtain the expression

$$u_{\mathrm{inc}}(x,y,z)=\exp\!\left(ik(x^2+y^2)/(2z)\right)/z,\tag{5}$$
which satisfies the paraxial wave equation, Eq. (1).

Our objective in this paper is to employ DFT techniques to determine the complex amplitude that emerges from the plane z = Δzso + Δzod when refractive index inhomogeneities may exist in the region z ≥ Δzso. The principal limitation when using DFT-based techniques is that the field must at all times be sampled in a manner that satisfies the Nyquist criterion. When considering spherical waves, impractically dense sampling requirements may result. Since the main aim of this paper is to avoid such dense sampling, I first show how these dense sampling requirements arise from Eq. (5). In particular, if one writes uinc as

$$u_{\mathrm{inc}}(x,y,z)=\exp\!\left(i\phi(x,y,z)\right)/z,\tag{6}$$
the local spatial frequencies may be defined as [7]
$$f_{\mathrm{inc},x}=\frac{1}{2\pi}\frac{\partial}{\partial x}\phi(x,y,z)=\frac{x}{\lambda z},\tag{7}$$
$$f_{\mathrm{inc},y}=\frac{1}{2\pi}\frac{\partial}{\partial y}\phi(x,y,z)=\frac{y}{\lambda z}.\tag{8}$$
However, a further constraint on sampling arises after calculating the angular spectrum of uinc using Eq. (2), which gives
$$U_{\mathrm{inc}}(a,b,z)=i\lambda\exp\!\left(-i(a^2+b^2)kz/2\right),\tag{9}$$
which must also be sampled according to the Nyquist criterion. By following the same procedure as was used to derive Eqs. (7) and (8), I can obtain the following expressions for the local frequency content of the angular spectrum:
$$f_{\mathrm{inc},a}=-\frac{az}{\lambda},\tag{10}$$
$$f_{\mathrm{inc},b}=-\frac{bz}{\lambda}.\tag{11}$$
In any DFT-based field propagation technique, discrete spatial coordinates and propagation vectors must be defined. I denote these by (x˜,y˜) and (a˜,b˜), respectively. Furthermore, for simplicity, in the remainder of this paper, I will assume that x˜=y˜ and a˜=b˜. Then, if I have N (assumed even without loss of generality) sample points for each such quantity, these discrete coordinates may be defined as
$$\tilde{x}=\tilde{y}=\{(j-N/2)\Delta x\;|\;0\le j<N\},\tag{12}$$
$$\tilde{a}=\tilde{b}=\{\lambda(j-N/2)/(N\Delta x)\;|\;0\le j<N\}.\tag{13}$$
For this analysis only, I shall neglect the additional complication arising from the implicit periodicity in N underlying the DFT. This discretization must satisfy the Nyquist criterion in both the spatial and reciprocal spaces. In particular, Δx=Δy and Δa=Δb must satisfy
$$\Delta x\le\frac{1}{2\max(|f_{\mathrm{inc},x}|)},\tag{14}$$
$$\Delta a\le\frac{1}{2\max(|f_{\mathrm{inc},a}|)},\tag{15}$$
where max(f) is taken to mean the maximum value of f. Equations (12) and (13) reveal that max(x)=ΔxN/2=X and max(a)=λ/(2Δx)=A, which, with the aid of Eqs. (7) and (10), allows us to write
$$\max(|f_{\mathrm{inc},x}|)=\frac{\Delta x N}{2\lambda\Delta z_{so}},\tag{16}$$
$$\max(|f_{\mathrm{inc},a}|)=\frac{\Delta z_{so}+\Delta z_{od}}{2\Delta x}.\tag{17}$$
In order to proceed, it is assumed that N is fixed and that Δx may be varied to achieve correct sampling. Then, by substituting Eqs. (16) and (17) into Eqs. (14) and (15), respectively, and using the relationship Δa=λ/(NΔx), the following inequalities are obtained, which must be satisfied if the Nyquist criterion is to be satisfied in both the spatial and reciprocal spaces:
$$\Delta x\le\sqrt{\frac{\lambda\Delta z_{so}}{N}},\tag{18}$$
$$\Delta x\ge\sqrt{\frac{\lambda(\Delta z_{so}+\Delta z_{od})}{N}},\tag{19}$$
which cannot be satisfied simultaneously. Equation (18) states that, since N is fixed, the transverse width of the simulation must not exceed a particular value if the field incident upon the computational volume is to be sampled correctly in the spatial domain. Equation (19), however, states that in order to correctly sample the angular spectrum in reciprocal space for fixed N, at some plane beyond the computational volume entrance plane, the transverse width of the simulation must exceed a particular value. The problem arises because these two requirements are mutually exclusive.
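As a concrete illustration of this conflict, the short calculation below evaluates the two bounds of Eqs. (18) and (19) for an assumed set of parameters (20 keV photons, N = 2048, Δzso = 1.6 m, Δzod = 0.3 m); these values are chosen for illustration only.

```python
import numpy as np

# Illustrative (assumed) parameters
wl = 6.2e-11      # wavelength [m], ~20 keV photons
N = 2048          # number of samples per Cartesian direction
dz_so = 1.6       # source-to-object distance [m]
dz_od = 0.3       # object-to-detector distance [m]

dx_max = np.sqrt(wl * dz_so / N)              # Eq. (18): spatial-domain bound
dx_min = np.sqrt(wl * (dz_so + dz_od) / N)    # Eq. (19): reciprocal-domain bound

print(f"Eq. (18) requires dx <= {dx_max:.3e} m")
print(f"Eq. (19) requires dx >= {dx_min:.3e} m")
# Since dz_so + dz_od > dz_so, dx_min > dx_max and no dx satisfies both bounds.
```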

The final aspect of preliminary theory deals with inhomogeneous refractive index distributions, which I treat using the so-called projection approximation [6,10]. I explain this with the aid of Fig. 1, which contains a position-dependent refractive index distribution, within the region zi ≤ z ≤ zi+1, described by

$$n(x,y,z)=1-\delta(x,y,z)+i\beta(x,y,z).\tag{20}$$
Under the projection approximation, it is assumed that if the field exiting the region zi ≤ z ≤ zi+1, in the absence of a refractive inhomogeneity, is given by u(x,y,zi+1), the perturbed field is given as u(x,y,zi+1)exp(iϕ(x,y)), where
$$\phi(x,y)=k\int_{z_i}^{z_{i+1}}\left(-\delta(x,y,z)+i\beta(x,y,z)\right)\mathrm{d}z,\tag{21}$$
and k is the free-space wavenumber. In obtaining this result, it is assumed that outside of the support of the refractive index inhomogeneity, δ(x,y,z)=0 and β(x,y,z)=0. Typical implementations of the MS method work by partitioning the volume containing a diffracting structure into slices, as depicted in Fig. 2, such that the (i+1)th slice is defined by zi ≤ z ≤ zi+1. The field exiting the plane z=zi+1 is first calculated assuming the slice zi ≤ z ≤ zi+1 is composed of free space. This is performed by calculating the angular spectrum of the field u(x,y,zi) according to Eq. (2), propagating the angular spectrum to the end of the slice using the free-space propagation operator [Eq. (4)], before evaluating the field at z=zi+1, u(x,y,zi+1), using Eq. (3). Refractive index inhomogeneity in the slice is then accounted for by multiplying u(x,y,zi+1) by exp(iϕ(x,y)), where ϕ(x,y) is evaluated according to Eq. (21). This procedure is then repeated until the field at z=Δzso+Δzod is obtained.
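This conventional global-coordinate procedure can be summarized by the following illustrative NumPy sketch, in which the projected phases ϕi(x,y) of Eq. (21) are assumed to be supplied as pre-sampled arrays; it is a sketch of the standard MS loop, not of the primed-coordinate algorithm developed later in this paper.

```python
import numpy as np

def multislice(u0, dx, wl, slice_thicknesses, projected_phases):
    """Conventional multi-slice propagation in the global coordinate system.

    slice_thicknesses : slice thicknesses dz_i = z_{i+1} - z_i
    projected_phases  : sampled phi_i(x, y) arrays from Eq. (21), or None for
                        a slice of free space
    """
    N = u0.shape[0]
    k = 2 * np.pi / wl
    f = np.fft.fftfreq(N, d=dx)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    a2b2 = (wl * fx) ** 2 + (wl * fy) ** 2           # a^2 + b^2
    u = u0.copy()
    for dz, phi in zip(slice_thicknesses, projected_phases):
        P = np.exp(-1j * k * a2b2 * dz / 2)          # free-space propagator, Eq. (4)
        u = np.fft.ifft2(P * np.fft.fft2(u))         # free-space step, Eqs. (2)-(4)
        if phi is not None:
            u = u * np.exp(1j * phi)                 # projection approximation, Eq. (21)
    return u
```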

Fig. 2. Diagram illustrating the principal coordinate system notation used in this paper.

B. Divergent-Beam-to-Plane-Wave Transformation

The sampling requirements expressed in Eqs. (18) and (19) for modeling divergent spherical waves cannot be achieved simultaneously. However, they can be circumvented by applying a coordinate system transformation that transforms the divergent spherical wave into a plane wave [6,8,9]. Although this subject is treated in detail by Paganin [6], I follow here the notation introduced by Sziklas and Siegman [8,9]. I introduce this transformation by considering the problem depicted in Fig. 1, whereby the complex amplitude incident upon the plane z=zi is known, which I denote by u(x,y,zi). Consider for now that the space zi ≤ z ≤ zi+1 is composed only of homogeneous space, i.e., there is no refractive index inhomogeneity present within the slice. I note that two coordinate systems are depicted in this diagram: the global coordinate system (x,y,z) and a coordinate system (x′i,y′i,z′i), valid only for zi ≤ z ≤ zi+1. In order to evaluate u(x,y,zi+1), I first perform a transformation into a primed coordinate system defined by [8,9]

$$u(x,y,z_i)=\exp\!\left(ik(x^2+y^2)/(2z_i)\right)v(x'_i,y'_i,z'_i)/z_i,\tag{22}$$
$$x'_i(x,z)=\frac{x}{M_i},\tag{23}$$
$$y'_i(y,z)=\frac{y}{M_i},\tag{24}$$
$$z'_i(z)=\frac{z-z_i}{M_i},\tag{25}$$
where Mi(z)=z/zi. It is clear from Eqs. (22)–(25) that at z=zi, I have x′i(x,zi)=x, y′i(y,zi)=y, and z′i(zi)=0; thus, the transverse coordinates are identical. Furthermore, it is straightforward to show that v(x′i,y′i,z′i) satisfies the paraxial wave equation [Eq. (1)] in the primed coordinate system. This means that techniques such as angular spectrum propagation, developed for calculating the propagation of fields that satisfy the paraxial wave equation, can be applied to v(x′i,y′i,z′i) instead of u(x,y,z). This overcomes the sampling problem expressed by Eqs. (18) and (19), since a divergent spherical wave in the global coordinate system becomes a plane wave in the primed coordinate system.

Having obtained v(x′i,y′i,z′i(zi)=0) from Eq. (22), it remains to calculate v(x′i,y′i,z′i(zi+1)) using angular spectrum propagation. I begin by calculating the angular spectrum of v(x′i,y′i,0) using Eq. (2), thus giving V(a′i,b′i,0), which can be propagated a distance z′i(zi+1) according to Eq. (4) as

$$V\!\left(a'_i,b'_i,z'_i(z_{i+1})\right)=P\!\left(a'_i,b'_i,z'_i(z_{i+1})\right)V(a'_i,b'_i,0),\tag{26}$$
where, from Eq. (25), z′i(zi+1)=(zi+1−zi)/Mi(zi+1) and Mi(zi+1)=zi+1/zi. The complex amplitude in the primed coordinate system can thus be obtained by applying Eq. (3) to V(a′i,b′i,z′i(zi+1)), thus giving v(x′i(x,zi+1),y′i(y,zi+1),z′i(zi+1)). The complex amplitude in the global coordinate system is found by inverting the coordinate system transformation expressed in Eqs. (22)–(25), which will result in
$$u(x,y,z_{i+1})=\exp\!\left(ik(x^2+y^2)/(2z_{i+1})\right)v\!\left(x'_i,y'_i,z'_i(z_{i+1})\right)/z_{i+1},\tag{27}$$
$$x=(z_{i+1}/z_i)\,x'_i(x,z_{i+1})=M_i(z_{i+1})\,x'_i(x,z_{i+1}),\tag{28}$$
$$y=(z_{i+1}/z_i)\,y'_i(y,z_{i+1})=M_i(z_{i+1})\,y'_i(y,z_{i+1}),\tag{29}$$
$$z=z_{i+1},\tag{30}$$
which illustrates how returning from the primed (i.e., local) coordinate system to the global coordinate system entails a magnification of the transverse coordinates. Having obtained u(x,y,zi+1), the procedure outlined above may be repeated to propagate the complex amplitude from the plane z=zi+1 to the plane z=zi+2 and so on.
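A minimal NumPy sketch of this single-slice procedure, Eqs. (22)–(30), is given below; it assumes a square sample grid built with meshgrid indexing "ij" and free space within the slice, and returns the magnified sample coordinates along with the field at z=zi+1. The function name and interface are assumptions of this sketch.

```python
import numpy as np

def propagate_slice_primed(u, x, y, z_i, z_ip1, wl):
    """Propagate u(x, y, z_i) to z = z_{i+1} via the primed coordinate system.

    x, y are meshgrid coordinate arrays (indexing="ij") sampling the field at
    z = z_i.  Returns the field at z_{i+1} and the magnified sample coordinates.
    """
    k = 2 * np.pi / wl
    M = z_ip1 / z_i                                  # M_i(z_{i+1})
    dz_p = (z_ip1 - z_i) / M                         # z'_i(z_{i+1}), Eq. (25)

    # Eq. (22): remove the spherical phase and the 1/z_i amplitude factor
    v = u * z_i * np.exp(-1j * k * (x ** 2 + y ** 2) / (2 * z_i))

    # Angular-spectrum propagation of the reduced field over z'_i(z_{i+1})
    N = v.shape[0]
    dx = x[1, 0] - x[0, 0]
    f = np.fft.fftfreq(N, d=dx)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    P = np.exp(-1j * k * ((wl * fx) ** 2 + (wl * fy) ** 2) * dz_p / 2)
    v = np.fft.ifft2(P * np.fft.fft2(v))

    # Eqs. (26)-(30): the sample grid is magnified by M and the spherical
    # phase is restored at z = z_{i+1}
    x_new, y_new = M * x, M * y
    u_new = np.exp(1j * k * (x_new ** 2 + y_new ** 2) / (2 * z_ip1)) * v / z_ip1
    return u_new, x_new, y_new
```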

C. Simulation of an Inhomogeneous Refractive Index Distribution

I use the projection approximation discussed in Section 1.A to model refractive index inhomogeneities present within the slice zi ≤ z ≤ zi+1. The projection approximation can be applied in either the global or primed coordinate systems due to the linearity of Eq. (22). I opt to apply the projection approximation directly in the primed coordinate system, i.e., I apply it directly to v(x′i,y′i,z′i(zi+1)), where I have dropped the dependence of x′i and y′i on (x,z) and (y,z), respectively, for brevity. It is important to note, however, that I must transform the argument of the projection function [Eq. (21)] from the global to the primed coordinate system. In particular, the projection approximation must be applied as v(x′i,y′i,z′i(zi+1))exp(iϕ(Mix′i,Miy′i)) when operating in the primed coordinate system. While this is evident from the transformation expressed in Eqs. (22)–(25), it may be understood intuitively by noting that the lateral cross section of an object must effectively be de-magnified in the primed coordinate system to compensate for the lack of divergence in the incident wavefront. However, despite this de-magnification, its optical thickness must remain the same in both coordinate systems.

D. Mitigating Aliasing Due to Use of Discrete Fourier Transforms

Even after applying the divergent-to-plane-wave transformation, aliasing, due to the use of DFTs, may still result from the presence of diffracting refractive index inhomogeneities. In particular, the projection function exp(iϕ(x,y)) will not, in general, be band limited. Thus, if a complex amplitude in the absence of a diffracting object, v(x′i,y′i,z′i), is band limited, the complex amplitude after the application of the projection approximation, v(x′i,y′i,z′i)exp(iϕ(Mix′i,Miy′i)), will not be, in general. I suggest an approach to overcoming this problem by calculating a band-limited projection function. This approach is possible only for diffracting objects for which the Fourier transform of the projection function can be calculated either analytically or numerically to high precision. Defining a projection function t(x,y)=exp(iϕ(x,y)), the band-limited version of t(x,y) is given by

$$t_{BL}(\tilde{x},\tilde{y})=\hat{F}^{-1}\!\left\{W(\tilde{f}_x,\tilde{f}_y)\,T(\tilde{f}_x,\tilde{f}_y)\right\},\tag{31}$$
where tBL is the band-limited version of t, W is a windowing function, T(fx,fy)=F{t(x,y)}, F̂ is the DFT operator, and F is the continuous Fourier transform operator. I note here that it is convenient to introduce the spatial frequency parameters (fx,fy)=(a/λ,b/λ). Variables marked with a tilde are assumed to be discretized. A variety of windowing functions could be used; however, in this work, a Tukey window was employed.
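The following sketch illustrates one possible numerical realization of Eq. (31), assuming that the continuous Fourier transform T(fx,fy) of the projection function can be evaluated by a user-supplied function; the separable Tukey window is taken from SciPy, and the window shape parameter α is an illustrative assumption.

```python
import numpy as np
from scipy.signal.windows import tukey

def band_limited_projection(T_func, N, dx, alpha=0.5):
    """Evaluate a band-limited projection function t_BL on an N x N grid, Eq. (31).

    T_func(fx, fy) must return the continuous Fourier transform T of the
    projection function t(x, y); alpha is the Tukey window shape parameter.
    The result is in unshifted DFT ordering (apply fftshift to centre it).
    """
    f = np.fft.fftfreq(N, d=dx)
    fx, fy = np.meshgrid(f, f, indexing="ij")

    # Separable Tukey window, built on an ordered (fftshift-ed) axis and then
    # reordered to match the unshifted DFT frequency layout
    w1d = np.fft.ifftshift(tukey(N, alpha))
    W = np.outer(w1d, w1d)

    T = T_func(fx, fy)
    # Riemann-sum approximation of the continuous inverse Fourier transform:
    # summing over frequency samples with spacing 1/(N dx) reduces to ifft2 / dx^2
    return np.fft.ifft2(W * T) / dx ** 2
```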

It is worth noting that for many applications, this step may be omitted without significantly perturbing the calculation of the transmitted complex amplitude, as is the case in many other implementations of the MS method. The approach is presented here as a means of eliminating aliasing for applications where this is important.

2. CALCULATION OF A FIELD PROPAGATING THROUGH AN ARBITRARY NUMBER OF SLICES

Consider the geometry depicted in Fig. 2, where the space z0 ≤ z ≤ zNs is divided into Ns slices. Propagation from the plane z=z0 to the plane z=zNs can be achieved by performing Ns individual propagation calculations between the planes indicated in Fig. 2. When using the divergent-beam-to-plane-wave transformation, it is important to note that a different transformation is required for each slice. The details of these transformations are indicated in Fig. 2. I denote the primed coordinate system in a particular slice by the subscript associated with the plane closest to the source. For example, the primed coordinate system corresponding to the region zi ≤ z ≤ zi+1 is denoted (x′i,y′i,z′i). Propagation through the first slice (i.e., the slice defined by z0 ≤ z ≤ z1) is achieved by evaluating v(x′0,y′0,0) according to Eqs. (22)–(25), giving the field in the primed coordinate system at z=z0. Under this transformation, the primed and global coordinate systems coincide at z=z0, i.e., x′0(x,z0)=x and y′0(y,z0)=y. Propagation to z=z1 is achieved by multiplying the angular spectrum associated with v(x′0,y′0,0) (i.e., V(a′0,b′0,0)) according to

$$V(a'_0,b'_0,\Delta z'_0)=P(a'_0,b'_0,\Delta z'_0)\,V(a'_0,b'_0,0),\tag{32}$$
where Δz′0=(z1−z0)/M0(z1). Returning to the global coordinate system at z=z1 entails the change of coordinates
$$x=M_0(z_1)\,x'_0(x,z_1),\tag{33}$$
$$y=M_0(z_1)\,y'_0(y,z_1).\tag{34}$$
An important detail emerges when propagating the field through the second slice beginning at z=z1. In particular, following the same strategy as for propagation through the first slice, I apply the transformation into the primed coordinate system (x′1,y′1,z′1), where x′1(x,z1)=x and y′1(y,z1)=y, meaning that
$$x'_1(x,z_1)=M_0(z_1)\,x'_0(x,z_1),\tag{35}$$
$$y'_1(y,z_1)=M_0(z_1)\,y'_0(y,z_1).\tag{36}$$
After propagation through this slice, the transformation becomes
$$x'_2(x,z_2)=x=M_1(z_2)M_0(z_1)\,x'_0(x,z_1),\tag{37}$$
$$y'_2(y,z_2)=y=M_1(z_2)M_0(z_1)\,y'_0(y,z_1).\tag{38}$$
I can thus write the following general expressions for the coordinate system transformation at the entrance plane z=zi of a slice:
$$x'_i(x,z_i)=x=x'_0(x,z_1)\prod_{j=0}^{i-1}M_j(z_{j+1}),\tag{39}$$
$$y'_i(y,z_i)=y=y'_0(y,z_1)\prod_{j=0}^{i-1}M_j(z_{j+1}),\tag{40}$$
and
$$M_i(z_{i+1})\,x'_i(x,z_{i+1})=x=x'_0(x,z_1)\prod_{j=0}^{i}M_j(z_{j+1}),\tag{41}$$
$$M_i(z_{i+1})\,y'_i(y,z_{i+1})=y=y'_0(y,z_1)\prod_{j=0}^{i}M_j(z_{j+1})\tag{42}$$
for the exit plane z=zi+1.

A. Outline of Algorithm

All of the elements of the MS algorithm developed in this paper have now been introduced, and the algorithm can be described in its entirety. Assuming a partition of slices as illustrated in Fig. 2, the complex amplitude due to a monochromatic point source is evaluated analytically on the sampled grid at z=z0 and is immediately transformed into the primed coordinate system according to Eqs. (22)–(25), yielding v̂(x̂0,ŷ0,z′0(z0)). This field is the base case of a recursive definition of the algorithm, which propagates the field in the primed coordinate system to the end of the final slice according to

$$\hat{v}\!\left(\hat{x}_i,\hat{y}_i,z'_i(z_{i+1})\right)=\exp\!\left(i\phi\!\left(M_i(z_{i+1})\hat{x}_i,\,M_i(z_{i+1})\hat{y}_i\right)\right)\cdot\hat{F}^{-1}\!\left\{P\!\left(\hat{a}_i,\hat{b}_i,z'_i(z_{i+1})\right)\hat{F}\!\left\{\hat{v}\!\left(\hat{x}_i,\hat{y}_i,z'_i(z_i)\right)\right\}\right\}.\tag{43}$$
It is also necessary to make the following assignment when transferring from the end of one slice into the beginning of an adjacent slice:
$$\hat{v}\!\left(\frac{\hat{x}_{i+1}}{M_i(z_{i+1})},\frac{\hat{y}_{i+1}}{M_i(z_{i+1})},z'_{i+1}(z_{i+1})\right)=\hat{v}\!\left(\hat{x}_i,\hat{y}_i,z'_i(z_{i+1})\right).\tag{44}$$
At the end of the final slice, located at z=zNs, the field in the primed coordinate system, v̂(x̂Ns−1,ŷNs−1,z′Ns−1(zNs)), must be transformed back to the global coordinate system according to Eqs. (22)–(25). It is worth noting that at each iteration of Eq. (43), v̂ is simply a matrix that is continually updated by the DFT and multiplication operations contained within Eq. (43). In particular, no additional operations such as resampling are necessary. Instead, the underlying real-space coordinate system vectors are continually being rescaled according to
$$\hat{x}_i=\hat{x}_0\prod_{j=0}^{i-1}M_j(z_{j+1}),\tag{45}$$
$$\hat{y}_i=\hat{y}_0\prod_{j=0}^{i-1}M_j(z_{j+1}),\tag{46}$$
where x̂0 and ŷ0 are established at the beginning of the calculation, and it is understood that x̂i and ŷi correspond to the beginning of the slice zi ≤ z ≤ zi+1. Furthermore, as is outlined in Section 2.B below, the reciprocal space coordinate system vectors are also constantly being rescaled according to
$$\hat{a}_i=\hat{a}_0\Big/\prod_{j=0}^{i-1}M_j(z_{j+1}),\tag{47}$$
$$\hat{b}_i=\hat{b}_0\Big/\prod_{j=0}^{i-1}M_j(z_{j+1}),\tag{48}$$
where â0 and b̂0 are established at the beginning of the simulation.
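The recursion of Eqs. (43)–(48) can be summarized by the following illustrative NumPy sketch, in which the projected phases are supplied as callables evaluated in global coordinates; the structure (a single array updated repeatedly while the coordinate vectors are rescaled) follows the description above, but the function names and interface are assumptions of this sketch rather than those of the released code [12].

```python
import numpy as np

def multislice_primed(v0, x0, y0, z_planes, wl, phi_funcs):
    """Multi-slice propagation of the reduced field, following Eqs. (43)-(48).

    v0        : reduced field at z_planes[0], obtained from Eq. (22)
    x0, y0    : meshgrid sample coordinates (indexing="ij") at z_planes[0]
    z_planes  : slice boundaries [z_0, z_1, ..., z_Ns]
    phi_funcs : callables phi_i(x, y) returning the projected phase of slice i
                in global coordinates, or None for free space
    Returns the reduced field and coordinate grids at the exit plane z_Ns.
    """
    k = 2 * np.pi / wl
    N = v0.shape[0]
    v, xh, yh = v0.copy(), x0.copy(), y0.copy()
    dx = xh[1, 0] - xh[0, 0]
    for i in range(len(z_planes) - 1):
        zi, zip1 = z_planes[i], z_planes[i + 1]
        M = zip1 / zi                              # M_i(z_{i+1})
        dz_p = (zip1 - zi) / M                     # z'_i(z_{i+1})
        # Free-space step of Eq. (43) on the current (slice-entrance) grid
        f = np.fft.fftfreq(N, d=dx)
        fx, fy = np.meshgrid(f, f, indexing="ij")
        P = np.exp(-1j * k * ((wl * fx) ** 2 + (wl * fy) ** 2) * dz_p / 2)
        v = np.fft.ifft2(P * np.fft.fft2(v))
        # Projection factor of Eq. (43), evaluated on the magnified argument
        if phi_funcs[i] is not None:
            v = v * np.exp(1j * phi_funcs[i](M * xh, M * yh))
        # Eqs. (45)-(48): rescale the real-space grid; the reciprocal-space
        # rescaling is absorbed into the updated sample spacing dx
        xh, yh, dx = M * xh, M * yh, M * dx
    return v, xh, yh
```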

B. Numerical Implementation

The simulation methodology outlined in this paper is designed to be implemented using DFTs. I assume that a divergent spherical wave is incident upon the plane z=z0. After applying the coordinate system transformation into the primed coordinate system, the spatially sampled grid must be defined as indicated in Eq. (12). By noting that at z=z0 I have (x′0,y′0)=(x,y), the sampled grid in the primed coordinate system is defined as

$$\tilde{x}'_0=\tilde{y}'_0=\{(j-N/2)\Delta x\;|\;0\le j<N\}\tag{49}$$
and the reciprocal space propagation vectors as
$$\tilde{a}'_0=\tilde{b}'_0=\{\lambda(j-N/2)/(N\Delta x)\;|\;0\le j<N\}.\tag{50}$$
The only non-trivial aspect of this algorithm from a numerical point of view is that each time the field is propagated through a slice, the sampled grid [Eq. (49)] is magnified after transformation back to the global coordinate system. For example, after performing angular spectrum propagation and transforming back to the global coordinate system after the first slice, the sampled complex amplitude ũ will now be implicitly defined on a global coordinate system sampled grid defined by M0(z1)x̃′0 and M0(z1)ỹ′0, respectively. In general, using Eqs. (41) and (42), the sampled grid in the global coordinate system at the exit plane z=zi+1 will be given by x̃′0 ∏_{j=0}^{i} Mj(zj+1) and ỹ′0 ∏_{j=0}^{i} Mj(zj+1), respectively. This has two principal consequences for the algorithm, the first of these being that at the exit plane z=zi+1, the sampled reciprocal space propagation vectors are given by ã′0/∏_{j=0}^{i} Mj(zj+1) and b̃′0/∏_{j=0}^{i} Mj(zj+1), respectively. The second of these implications is that any band-limited projection functions [see Eq. (31)] must be evaluated on a different spatially sampled grid for every slice. In some cases, this may not entail significant computational cost. However, in the case of diffracting spheres, e.g., where the Fourier transform of the projection function must be evaluated via numerical integration (see Appendix A), direct evaluation of this Fourier transform on a different grid for each slice may be too computationally costly. In this case, however, use can be made of the Fourier transform relationship
$$t(Mx,My)=F^{-1}\!\left\{T(f_x/M,f_y/M)\right\}/M^2,\tag{51}$$
meaning that Tsphere(√(fx²+fy²)) must be resampled onto the reciprocal space frequency grid (f̃x/M, f̃y/M), which may be done efficiently by one-dimensional interpolation of a lookup table. As an additional step, prior to performing the inverse Fourier transformation to obtain tsphere(Mx̃, Mỹ), it is convenient to multiply T(fx/M, fy/M) by exp(−i2π(f̃x xs + f̃y ys)) to allow for modeling spheres located at an arbitrary position (xs, ys, zs), where it is assumed that zs lies within the slice being considered.
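A sketch of this lookup-table resampling is given below, under two stated assumptions: the table stores the radial transform of (tsphere − 1) so that the Dirac-delta term can be handled simply by adding unity after the inverse DFT, and (xs, ys) is specified in the primed (slice-entrance) coordinates.

```python
import numpy as np

def sphere_projection_on_grid(xi_table, Tm1_table, N, dx, M, xs, ys):
    """Evaluate the sphere projection function on the magnified grid of a slice.

    xi_table, Tm1_table : radial lookup table (xi increasing) of the transform
                          of (t_sphere - 1), tabulated as in Appendix A
    M                   : accumulated magnification of the current slice
    xs, ys              : sphere centre, assumed to be given in the primed
                          (slice-entrance) transverse coordinates
    Follows Eq. (51); the Dirac-delta part of T_sphere is accounted for by
    adding unity after the inverse DFT.
    """
    f = np.fft.fftfreq(N, d=dx)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    xi = np.sqrt((fx / M) ** 2 + (fy / M) ** 2)     # radial coordinate of (fx/M, fy/M)
    T = (np.interp(xi.ravel(), xi_table, Tm1_table.real)
         + 1j * np.interp(xi.ravel(), xi_table, Tm1_table.imag)).reshape(N, N)
    # Shift theorem: place the sphere at (xs, ys); a band-limiting window
    # (Section 1.D) would also be applied to T at this point
    T = T * np.exp(-1j * 2 * np.pi * (fx * xs + fy * ys))
    # Inverse DFT (Riemann sum of the continuous inverse transform) with the
    # 1/M^2 factor of Eq. (51); the unit background of t_sphere is restored
    return 1.0 + np.fft.ifft2(T) / (dx ** 2 * M ** 2)
```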

C. Sampling Requirements

I shall define the sampling requirements with reference to a cutoff frequency fco, such that the windowing function W(fx,fy), assumed to be separable in fx and fy, takes the value of 0 for |fx| > fco or |fy| > fco. From this point on, I shall assume that the spatial, and therefore reciprocal, computational grids are entirely isotropic. The first requirement that Δx=Δy must satisfy is

$$\Delta x\le\frac{1}{2f_{co}}.\tag{52}$$
The next requirement is that propagation of the angular spectrum from z=z0 to z=zNs using Eq. (4) must be correctly sampled in reciprocal space. Following Matsushima and Shimobaba [11], I find that the sampling period in reciprocal space, Δfx=Δfy, must satisfy
$$\Delta f_x\le\frac{z_{N_s}}{2\lambda(z_{N_s}-z_0)\,z_0\,f_{co}},\tag{53}$$
but, since Δfx=1/(NΔx), this can also be written as
$$N\ge\frac{2\lambda(z_{N_s}-z_0)\,z_0\,f_{co}}{\Delta x\,z_{N_s}}.\tag{54}$$
The criteria expressed in Eqs. (52) and (54) do not consider the sampling requirements imposed by diffracting objects. In the case of modeling a single diffracting object, I shall assume that fco is chosen such that the band-limited projection function, tBL(x̂,ŷ), has satisfactory agreement with the actual projection function, t(x,y). I do not propose criteria for making this assessment; however, one can assess this by considering the difference between scattered fields predicted by both treatments. In order to avoid aliasing when using the projection approximation, a guard band of width fco should be employed, as illustrated in Fig. 3. Whenever a complex amplitude, û(x̂,ŷ), encounters a diffracting object, t̂(x̂,ŷ), imposed via the projection approximation, aliasing will not occur if both û and t̂ do not contain spectral components within the guard band. The field after diffraction, û(x̂,ŷ)t̂(x̂,ŷ), will in general contain spectral components within the guard band. In order to prevent aliasing at a subsequent diffracting object, the spectral components of the field within the guard band can be filtered out. The energy lost due to such filtering can be monitored to ensure it is kept below a tolerable level, dependent upon the application. In practice, however, it has been found in the examples considered in this paper that the amount of energy entering the guard band is negligible, meaning that artifacts arising from aliasing are also negligible.
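A minimal sketch of this guard-band filtering step, which also returns the fraction of spectral energy removed so that it can be monitored as described above, might take the following form:

```python
import numpy as np

def filter_guard_band(u, dx, f_co):
    """Zero spectral components beyond the cutoff f_co and report energy loss.

    Returns the filtered field and the fraction of spectral energy that lay
    in the guard band (which should remain negligible, see Section 2.C).
    """
    N = u.shape[0]
    f = np.fft.fftfreq(N, d=dx)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    U = np.fft.fft2(u)
    keep = (np.abs(fx) <= f_co) & (np.abs(fy) <= f_co)
    total = np.sum(np.abs(U) ** 2)
    lost = np.sum(np.abs(U[~keep]) ** 2) / total
    U[~keep] = 0.0
    return np.fft.ifft2(U), lost
```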

Fig. 3. Illustration of the region in reciprocal space (the intersection of |fx|<fco and |fy|<fco) where the angular spectrum of the field is permitted to reside, along with a guard band where angular spectrum content is filtered after propagation through each layer.

D. Non-Uniform Background Refractive Indices

Up until this point, for clarity, it has been assumed that all scatterers are contained within free space. It is, however, relatively simple to allow each slice defined by zi ≤ z ≤ zi+1 to have a background refractive index ni = 1 − δi + iβi. If the slice contains a refractive index inhomogeneity as in Eq. (20), this should be redefined as

$$n(x,y,z)=1-\left(\delta(x,y,z)-\delta_i\right)+i\left(\beta(x,y,z)-\beta_i\right),\tag{55}$$
recalling that this applies to (x,y,z) inside the inhomogeneity only. The free-space propagation operator [Eq. (4)] must be redefined as
$$P(a,b,\Delta z)=\exp\!\left(-i\frac{k}{1-\delta_i+i\beta_i}\,\frac{(a^2+b^2)\Delta z}{2}\right).\tag{56}$$
A final step must then be applied, in which the total field exiting the slice is multiplied by
$$T_i=\exp\!\left(ik(z_{i+1}-z_i)(-\delta_i+i\beta_i)\right),\tag{57}$$
which is necessary because the field u(x,y,z) is considered within the paraxial approximation, meaning that a term exp(ikz) is factored out of the field. It is useful to note that the sampled reciprocal space parameters (âi, b̂i) do not require rescaling due to the redefinition of the free-space propagation operator in Eq. (56).
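An illustrative sketch of a propagation step through such a slice is given below; the forms of the modified propagator and of Ti follow Eqs. (56) and (57) as written above, and the separation of the propagation distance from the physical slab thickness reflects the primed-coordinate scheme of Section 2 (an assumption of this sketch).

```python
import numpy as np

def propagate_background_slice(u, dx, wl, dz_prop, dz_slab, delta_i, beta_i):
    """Angular-spectrum step through a slice with background index
    n_i = 1 - delta_i + i*beta_i, using the modified propagator of Eq. (56)
    and the slab factor T_i of Eq. (57).

    dz_prop : propagation distance used in the propagator (in the multi-slice
              scheme of Section 2 this is the primed distance z'_i(z_{i+1}))
    dz_slab : physical slab thickness z_{i+1} - z_i used in T_i
    """
    N = u.shape[0]
    k = 2 * np.pi / wl
    f = np.fft.fftfreq(N, d=dx)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    a2b2 = (wl * fx) ** 2 + (wl * fy) ** 2
    n_i = 1.0 - delta_i + 1j * beta_i
    P = np.exp(-1j * k * a2b2 * dz_prop / (2.0 * n_i))          # Eq. (56)
    T_i = np.exp(1j * k * dz_slab * (-delta_i + 1j * beta_i))   # Eq. (57)
    return T_i * np.fft.ifft2(P * np.fft.fft2(u))
```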

3. EXAMPLES AND ANALYSIS

Code for performing each of the following examples may be freely downloaded [12].

A. Diffracting Aperture

I consider first the simple example of a square diffracting aperture and a monochromatic point source as depicted in Fig. 4, which shows a point source placed 1.6 m from a square aperture of width W=20μm. The field is observed on the observation plane, which is located a further 0.3 m from the aperture. The attenuating part of the aperture is assumed to be perfectly attenuating, while the transmitting region of the aperture is assumed to be perfectly transmitting. The point source is assumed to be monochromatic with photon energy of 20 keV.

Fig. 4. Schematic diagram of the diffracting aperture and point source upon which all examples in Section 3 are based. Values of W=20μm, za=1.6m, and zo=1.9m were chosen. Note that the diagram is not drawn to scale.

The objective of this test is to demonstrate how the choice of N, the number of sample points along each Cartesian direction, impacts upon the accuracy of the simulation method. In particular, for a given choice of N, it remains only to choose either Δx (the isotropic spatial sampling period) or, equivalently, X (half the total spatial width of the simulation; see Section 1.A). I choose the value of Δx to ensure that, within the primed coordinate system, a spherical wave that has its source in the plane z=za is correctly sampled after propagating a distance z′=(zo−za)za/zo [see Eq. (25)], resulting in the requirement given by Eq. (14):

$$\Delta x\le\sqrt{\frac{\lambda(z_o-z_a)z_a}{N z_o}},\tag{58}$$
where Δx was chosen to be 0.95 times its maximum allowable value. Note that Eq. (58) represents the worst-case scenario in which a diffracting object located at z=za leads to the creation of a spherical wave within the primed coordinate system. The cutoff frequency fco is thus dictated by Eq. (52). Increasing the value of N allows fco to be increased, which reduces the difference between the band-limited and true projection functions of the aperture, and later the spheres, as explained in Section 1.D.
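For reference, the following short calculation evaluates Eq. (58) and the resulting cutoff frequency for the parameters of Fig. 4 and the values of N used below; the conversion from photon energy to wavelength and the choice fco = 1/(2Δx), taken from Eq. (52) with equality, are the only assumptions.

```python
import numpy as np

# Parameters stated in Section 3.A; the photon-energy-to-wavelength conversion
# uses hc ~ 1.23984e-6 eV*m.
E = 20e3                      # photon energy [eV]
wl = 1.23984e-6 / E           # wavelength [m], ~0.62 Angstrom
za, zo = 1.6, 1.9             # aperture and observation-plane distances [m]

for N in (1024, 3072, 5120, 7168):
    dx_max = np.sqrt(wl * (zo - za) * za / (N * zo))   # Eq. (58)
    dx = 0.95 * dx_max                                 # factor used in the text
    f_co = 1.0 / (2 * dx)                              # Eq. (52) with equality
    print(f"N = {N:5d}: dx = {dx*1e9:7.2f} nm, f_co = {f_co*1e-6:6.2f} cycles/um")
```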

The results of this simulation are shown in Figs. 5 and 6. Figure 5 shows the magnitude of the diffracted field at z=zo directly evaluated using Fresnel–Kirchhoff diffraction theory as per Eq. (1) of Ref. [13], without using the primed coordinate system. The field magnitudes as evaluated using the primed coordinate system angular spectrum model for N=1024,3072,5120 and 7168 appear very similar to that plotted in Fig. 5. As a result, the magnitude of the differences between the complex amplitudes evaluated using the angular spectrum approach and those evaluated directly are plotted in Fig. 6. These plots show that at lower values of N, the diffracted field at z=zo is spatially truncated as a result of increased filtering of the aperture’s projection function. These plots show clearly that increasing N increases the spatial extent over which the calculated diffracted field remains accurate. To further probe the effect that N has upon accuracy, I introduce an error metric, defined as

$$\epsilon=\frac{\sum_{i,j}\left|u(i\Delta x,j\Delta y)-u_{\mathrm{ref}}(i\Delta x,j\Delta y)\right|^2}{\sum_{i,j}\left|u_{\mathrm{ref}}(i\Delta x,j\Delta y)\right|^2},\tag{59}$$
to quantify the error between complex amplitude u(iΔx,jΔy) and a reference complex amplitude uref(iΔx,jΔy). This error metric is plotted in Fig. 7, where the field calculated using Fresnel–Kirchhoff diffraction theory is used as the reference field. This plot shows that the error in the field calculated using the primed coordinate system angular spectrum model reduces approximately linearly with the log of N.
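For completeness, the error metric of Eq. (59) amounts to the following one-line computation (a sketch assuming both fields are sampled on the same grid):

```python
import numpy as np

def error_metric(u, u_ref):
    """Error metric of Eq. (59): summed squared difference between the field
    and the reference field, normalized by the summed squared reference."""
    return np.sum(np.abs(u - u_ref) ** 2) / np.sum(np.abs(u_ref) ** 2)
```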

Fig. 5. Magnitude of the field diffracted by the aperture found by direct evaluation of the Fresnel–Kirchhoff diffraction integral.

Fig. 6. Plots of the magnitude of the error in each calculated complex amplitude relative to that calculated using Fresnel–Kirchhoff diffraction theory for a diffracting aperture.

Fig. 7. Plot of the error ϵ as a function of N, where the reference field is that calculated using Fresnel–Kirchhoff diffraction theory.

B. Diffracting Aperture and Single Sphere

In this example, I introduce a single sphere of radius 5 μm with δ = 2×10⁻⁶, located a distance 0.1 m downstream of the aperture and situated at transverse position (5,0) μm relative to the center of the aperture. Following the same method of presentation as in the previous example, the diffracted field found by direct evaluation of the Fresnel–Kirchhoff diffraction integral is plotted in Fig. 8. For the case of a single sphere, a two-dimensional extension of Eq. (8) in Ref. [13] was used to find the directly evaluated field. The difference between the primed coordinate system angular spectrum model and the directly evaluated field is also plotted in Fig. 9, which is seen to be very similar to the case where a sphere was not present. I do not plot the error metric in this case, as it was seen to be nearly identical to that which arose in the absence of a sphere, as is plotted in Fig. 7. This example serves the purpose of demonstrating that the primed coordinate system angular spectrum model converges to the directly evaluated field for a reasonably general example.

Fig. 8. Magnitude of the field diffracted by the aperture and a single sphere found by direct evaluation of the Fresnel–Kirchhoff diffraction integral.

Fig. 9. Plots of the magnitude of the error in each calculated complex amplitude relative to that calculated using Fresnel–Kirchhoff diffraction theory for the case of a single sphere and diffracting aperture.

C. Diffracting Aperture and Two Spheres

In this example, I include an additional sphere with the same radius and δ value as in the previous example, but located 0.15 m downstream of the aperture and at transverse location (5,5) μm with respect to the center of the aperture. This example was calculated for the case N=7168 only. The field at the observation plane, calculated using the primed coordinate system angular spectrum model, is shown in the left of Fig. 10. For validation purposes, a field was evaluated directly using Fresnel–Kirchhoff diffraction theory. The Fresnel–Kirchhoff field was evaluated by first considering each of the two spheres in isolation. The field resulting from this calculation, as if Born’s first-order approximation holds, is shown in the right-hand panel of Fig. 10, which demonstrates that the first-order approximation is not appropriate in this case. The field can be evaluated correctly by taking into account the field that is scattered by both spheres, as is detailed in Appendix B. Once the multiply scattered field is taken into account, an error of ϵ = 1.2×10⁻³ is obtained, thus illustrating the accuracy of the primed coordinate system angular spectrum model.

Fig. 10. Magnitude of the field obtained when two spheres are located between the aperture and observation plane (left) and the magnitude of the field obtained by considering the two spheres in isolation (right).

D. Diffracting Aperture and an Ensemble of Spheres

This final example illustrates the type of application where this model may be particularly useful. Using the same combination of source and aperture as in the previous example, this example considers an ensemble of non-overlapping spheres. Spheres were arranged within a cuboid with a square transverse cross section of width 100 μm and bounded by the planes located 5 cm and 10 cm, respectively, downstream of the diffracting aperture. A total of 47,746 spheres were arranged within this volume resulting in a sphere density of 5% by volume. Although the computation time could have been reduced by considering slices containing multiple spheres, the simulation was performed with one sphere per slice. This was done for two main reasons: (1) to demonstrate how computationally efficient the simulation is, and (2) to ensure the validity of the projection approximation. Simulations were performed for a single ensemble of spheres, but each simulation considered a different value of δ. In any one simulation, each sphere had the same δ value.

The results of the simulation are illustrated in Fig. 11. Each of the four columns in Fig. 11 corresponds to a different value of δ of the spheres, including the free-space case. The first row of images shows a somewhat zoomed-out view of the field magnitude on the observation plane, while the middle row shows a zoomed-in view. The lower row of images shows a histogram of the proportion of pixels having field magnitude within a certain range. The bar plots are derived from the pixels within the magnified views of the middle row of images. The line plot shows the distribution of field magnitudes that would be expected from a Rayleigh distribution [14] having mean equal to that of the simulated field distributions shown in the middle row of images. This comparison with the Rayleigh distribution is performed purely to demonstrate that for δ = 10⁻⁶, the complex amplitude can be considered to be a fully developed speckle pattern.

Fig. 11. Magnitude of the field that results when an ensemble of spheres is located between a square diffracting aperture and the observation plane. Each group of three images corresponds to a different value of δ for the spheres, including the top left group, which represents free space. The middle image in each group is a magnified view of the region denoted by the outline box in the top image. The lowest image shows a histogram of the magnitudes of the pixels within the magnified region along with a Rayleigh distribution with mean equal to that of the magnified region.

E. Comparison of Computational Complexity

The primed coordinate system angular spectrum method is several orders of magnitude more computationally efficient than direct evaluations of the Fresnel–Kirchhoff integral for the case of a single sphere. For multiple spheres, direct evaluations of the Fresnel–Kirchhoff integral become unfeasible from a computational point of view. This is illustrated in Fig. 12, which plots the computation time per observation point against the number of observation points. This timing information was obtained for the case of a single sphere and a square diffracting aperture, as discussed in Section 3.B. Simulations were run on a computer containing 512 GB of RAM and two Intel Xeon Gold 6148 processors, each possessing 20 physical cores running at 2.40 GHz. I note, however, that only a small proportion of the RAM was used for either simulation. Both simulation techniques were implemented in MATLAB (R2017b), and all 40 physical cores were used. The Fresnel–Kirchhoff method was parallelized trivially by using a parfor loop to compute the field at different points in the observation plane in parallel. The primed coordinate system angular spectrum method was parallelized using the parallelization intrinsic to MATLAB’s implementation of the fast Fourier transform (i.e., fft2).

Fig. 12. Plots of computation time per sample point in the observation plane for the Fresnel–Kirchhoff and primed coordinate system angular spectrum methods.

Both simulation techniques are seen to exhibit a reduction in the computation time per sample point as the number of sample points increases, since simulation overheads are amortized over an increasing number of points. The computation time per point for the primed coordinate system angular spectrum method is on the order of 5×10⁴ times lower than that of the Fresnel–Kirchhoff method.

The simulations considered in Section 3.D each required approximately the same computation time, irrespective of the value of δ the spheres were assumed to have. Each computation such as those displayed in Fig. 11 took an average of 6.3×10⁴ seconds to compute. While this may be considered a substantial computation time, such a calculation is intractable using the Fresnel–Kirchhoff formalism. Also, as discussed in Section 3.D, this simulation could have been evaluated significantly faster by considering multiple spheres per slice, thus substantially reducing the number of evaluations of fft2.

4. CONCLUSION

I have shown how the simulation of divergent beams propagating through axially extended diffracting objects can pose sampling challenges when using Fourier-based MS propagation techniques. Two principal solutions to these challenges have been demonstrated. The first of these is to use a divergent-wave-to-plane-wave transformation that significantly reduces the sampling requirement when simulating the propagation of such beams through homogeneous volumes. The second of these is the calculation of band-limited object projection functions, which guarantees that aliasing will not occur when using Fourier theory to propagate a beam that has been perturbed by a diffracting object under the projection approximation. Both of these solutions have been combined into a MS simulation technique. I have provided validation of the technique for an example employing a square aperture only and for the case of a square aperture with one or two spheres, respectively. I have also used the technique to simulate beam propagation through an ensemble of spheres dispersed throughout a macroscopic-scale volume. I expect this technique to be useful in the study of modalities such as x-ray dark-field imaging.

APPENDIX A: BAND-LIMITED PROJECTION FUNCTION OF A SPHERE

The projection function of a sphere, located at the origin, with refractive index n = 1 − δ + iβ and radius R is given as

$$t_{\mathrm{sphere}}(x,y)=\begin{cases}\exp\!\left(2ik(-\delta+i\beta)\sqrt{R^2-x^2-y^2}\right)&x^2+y^2\le R^2\\[4pt] 1&\text{otherwise},\end{cases}$$
and its Fourier transform by
$$T_{\mathrm{sphere}}(f_x,f_y)=\iint t_{\mathrm{sphere}}(x,y)\exp\!\left(-i2\pi(xf_x+yf_y)\right)\mathrm{d}x\,\mathrm{d}y=\delta(f_x,f_y)+\int_{-R}^{R}\int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}\left(t_{\mathrm{sphere}}(x,y)-1\right)\exp\!\left(-i2\pi(xf_x+yf_y)\right)\mathrm{d}y\,\mathrm{d}x,$$
where δ(fx,fy) is Dirac’s delta function. I can simplify Eq. (A1) by substituting in polar coordinates for both the spatial and frequency coordinates as
$$x=\rho\cos\phi,$$
$$y=\rho\sin\phi,$$
$$f_x=\xi\cos\phi',$$
$$f_y=\xi\sin\phi',$$
allowing us to write Eq. (A1) as
$$T_{\mathrm{sphere}}(\xi)=\delta(f_x,f_y)+\int_0^{2\pi}\!\int_0^R\left(t_{\mathrm{sphere}}(\rho)-1\right)\exp\!\left(-i2\pi\rho\xi\cos(\phi-\phi')\right)\rho\,\mathrm{d}\rho\,\mathrm{d}\phi=\delta(f_x,f_y)+2\pi\int_0^R\left(t_{\mathrm{sphere}}(\rho)-1\right)\rho\,J_0(2\pi\rho\xi)\,\mathrm{d}\rho,$$
where J0 is the zero-order Bessel function of the first kind. As discussed in Section 1.D, the band-limited version of tsphere(x,y) is then obtained as
$$t_{\mathrm{sphere},BL}(\tilde{x},\tilde{y})=\hat{F}^{-1}\!\left\{W(\tilde{f}_x,\tilde{f}_y)\,T_{\mathrm{sphere}}\!\left(\sqrt{\tilde{f}_x^2+\tilde{f}_y^2}\right)\right\},$$
recalling that (x˜,y˜) and (f˜x,f˜y) are the discretized spatial and reciprocal space sample grids, respectively.
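A sketch of how the radial Hankel integral above might be tabulated numerically is given below; it tabulates only the integral term (i.e., the transform of tsphere − 1, with the Dirac-delta term handled separately), and the quadrature densities and function name are illustrative assumptions.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import trapezoid

def sphere_transform_table(R, delta, beta, wl, xi_max, n_xi=2048, n_rho=4096):
    """Tabulate the radial Hankel integral of (t_sphere - 1) described above.

    R           : sphere radius
    delta, beta : refractive index decrement and absorption index of the sphere
    xi_max      : largest radial spatial frequency required (e.g. sqrt(2) * f_co)
    Returns (xi_table, Tm1_table), suitable for 1D interpolation as in Section 2.B.
    """
    k = 2 * np.pi / wl
    rho = np.linspace(0.0, R, n_rho)
    # Projection function of the sphere minus its unit background
    t_m1 = np.exp(2j * k * (-delta + 1j * beta) * np.sqrt(R ** 2 - rho ** 2)) - 1.0
    xi_table = np.linspace(0.0, xi_max, n_xi)
    Tm1_table = np.empty(n_xi, dtype=complex)
    for m, xi in enumerate(xi_table):
        # 2*pi * integral_0^R (t_sphere(rho) - 1) rho J0(2*pi*rho*xi) drho
        Tm1_table[m] = 2 * np.pi * trapezoid(t_m1 * rho * j0(2 * np.pi * rho * xi), rho)
    return xi_table, Tm1_table
```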

APPENDIX B: FRESNEL–KIRCHHOFF DIFFRACTION THEORY

I consider a complex amplitude incident upon a diffracting object, which I define as uinc(x,y,z). I assume that the diffracting object is contained within the planes z=z1 and z=z1+Δz1. Then, I apply the projection approximation, as outlined in Section 1.A, to obtain the field in the plane z=z1+Δz1 as u(x,y,z1+Δz1)=uinc(x,y,z1+Δz1)exp(iϕ1(x,y)), where ϕ1(x,y) is evaluated according to Eq. (21) with zi=z1 and zi+1=z1+Δz1. The field at some arbitrary observation point, downstream of the diffracting object, may then be found according to

$$u(x,y,z_o)=\iint K\!\left(x,y,x',y',z_o-(z_1+\Delta z_1)\right)u_{\mathrm{inc}}(x',y',z_1+\Delta z_1)\exp\!\left(i\phi_1(x',y')\right)\mathrm{d}x'\,\mathrm{d}y',$$
where K is the Fresnel–Kirchhoff integral kernel given by [13]
$$K(x,y,x',y',z)=\frac{\exp(ikz)}{i\lambda z}\exp\!\left(ik\frac{(x-x')^2+(y-y')^2}{2z}\right).$$
The integral in Eq. (B1) must be evaluated over an infinite plane, which is unsuitable for numerical integration. Instead, an often-used technique is to rewrite the integral as
$$u(x,y,z_o)=u_{\mathrm{inc}}(x,y,z_o)+\int_{\Omega_1}K\!\left(x,y,x',y',z_o-(z_1+\Delta z_1)\right)\cdot u_{\mathrm{inc}}(x',y',z_1+\Delta z_1)\left(\exp\!\left(i\phi_1(x',y')\right)-1\right)\mathrm{d}x'\,\mathrm{d}y',$$
where uinc(x,y,zo) is the field incident directly on (x,y,zo) in the absence of the diffracting aperture, and Ω1 is the transverse extent of the diffracting object. Using this approach, the concept of the scattered field, usc(x,y,zo), can be defined by decomposing u(x,y,zo) as
$$u(x,y,z_o)=u_{\mathrm{inc}}(x,y,z_o)+u_{\mathrm{sc}}(x,y,z_o).$$
If now a second diffracting object is introduced within the planes z=z2 and z=z2+Δz2, with z1+Δz1 ≤ z2 ≤ z2+Δz2 ≤ zo, the field at (x,y,zo) may be found by evaluating Eq. (B3), noting that the field incident upon the second sphere is given as u(x,y,z2)=uinc(x,y,z2)+usc(x,y,z2), which allows the field on the observation plane to be evaluated as
$$u(x,y,z_o)=u_{\mathrm{inc}}(x,y,z_o)+u_{\mathrm{sc}}(x,y,z_o)+\int_{\Omega_2}K\!\left(x,y,x',y',z_o-(z_2+\Delta z_2)\right)\left(u_{\mathrm{inc}}(x',y',z_2+\Delta z_2)+u_{\mathrm{sc}}(x',y',z_2+\Delta z_2)\right)\left(\exp\!\left(i\phi_2(x',y')\right)-1\right)\mathrm{d}x'\,\mathrm{d}y'.$$
Up until this point, I have used the notation usc(x,y,z) to refer to the field scattered by the first diffracting object only. In order to avoid confusion, I now introduce usc,1(x,y,zo) and usc,2(x,y,zo), which refer to the fields scattered by the two diffracting objects, respectively, each in isolation. This allows the field due to two diffracting objects to be decomposed as
$$u(x,y,z_o)=u_{\mathrm{inc}}(x,y,z_o)+u_{\mathrm{sc},1}(x,y,z_o)+u_{\mathrm{sc},2}(x,y,z_o)+u_{\mathrm{sc,sc}}(x,y,z_o),$$
where uinc(x,y,zo) is the field directly incident in the absence of any diffracting objects, and usc,sc(x,y,zo) is the multiply scattered field.

Funding

Royal Society (UF130304); Engineering and Physical Sciences Research Council (EPSRC) (EP/P005209/1).

Acknowledgment

P. M. is supported by a Royal Society University Research Fellowship.

REFERENCES

1. J. M. Cowley and A. F. Moodie, “The scattering of electrons by atoms and crystals. I. A new theoretical approach,” Acta Crystallogr. 10, 609–619 (1957).

2. A. Hare and G. Morrison, “Near-field soft x-ray diffraction modelled by the multislice method,” J. Mod. Opt. 41, 31–48 (1994).

3. Y. Wang, “A numerical study of resolution and contrast in soft x-ray contact microscopy,” J. Microsc. 191, 159–169 (1998).

4. A. Malecki, G. Potdevin, and F. Pfeiffer, “Quantitative wave-optical numerical analysis of the dark-field signal in grating-based x-ray interferometry,” Europhys. Lett. 99, 48001 (2012).

5. K. Li, M. Wojcik, and C. Jacobsen, “Multislice does it all—calculating the performance of nanofocusing x-ray optics,” Opt. Express 25, 1831–1846 (2017).

6. D. Paganin, Coherent X-Ray Optics (Oxford University, 2006).

7. J. Goodman, Introduction to Fourier Optics (McGraw-Hill, 2005).

8. E. A. Sziklas and A. E. Siegman, “Mode calculations in unstable resonators with flowing saturable gain. 2: fast Fourier transform method,” Appl. Opt. 14, 1874–1889 (1975).

9. E. A. Sziklas and A. E. Siegman, “Diffraction calculations using fast Fourier transform methods,” Proc. IEEE 62, 410–412 (1974).

10. K. S. Morgan, K. K. W. Siu, and D. M. Paganin, “The projection approximation and edge contrast for x-ray propagation-based phase contrast imaging of a cylindrical edge,” Opt. Express 18, 9865–9878 (2010).

11. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17, 19662–19673 (2009).

12. P. R. T. Munro, “Multi-slice x-ray beam propagation code,” http://prtmunro.net.

13. P. R. T. Munro, K. Ignatyev, R. Speller, and A. Olivo, “The relationship between wave and geometrical optics models of coded aperture type x-ray phase contrast imaging systems,” Opt. Express 18, 4103–4117 (2010).

14. J. W. Goodman, Laser Speckle and Related Phenomena (Springer-Verlag, 1975).
