Analysis of Step 4

From CFD Benchmark
'''Step 4''' is definitely the most complex but also the most interesting case to compare '''high-fidelity codes''' for '''turbulent reacting flows''', since it contains all the features relevant for '''turbulent combustion'''. Therefore, a more detailed analysis is useful. The comparisons will involve:
[[File:]]

Profile of temperature at <math>t = 2 \tau_{ref}</math>, <math>x = 0.5L</math>, and <math>z = 0.5L</math> for '''3-D non-reacting multi-species flow''' ('''Step 3''')

[[File:]]

1. The '''evolution of maximum temperature''' versus '''time''', as depicted in Fig. 10;

2. '''Velocity fields''' at <math>t = 2 \tau_{ref} = 0.5 ms</math> along the centerlines of the domain, as shown in Fig. 11;

3. Profiles of temperature, heat release and mass fractions of <math>H_2</math>, <math>O_2</math> and <math>OH</math> at <math>t = 2 \tau_{ref} = 0.5 ms</math> along the centerline of the domain, as illustrated in Figs. 12 and 13.
  
 
The simulations of '''YALES2''', '''DINO''' and '''Nek5000''' are presented in the following subsections for two different resolutions in space (<math>256^3</math> and <math>512^3</math>), in order to check the impact of the '''spatial resolution''' on the results. Additional data for other grids are also available (<math>384^3</math> for both '''YALES2''' and '''DINO''', and <math>768^3</math> only for '''DINO'''). They are not discussed at length in the text and in separate figures in the interest of space, but the corresponding values are included in Tables 4 and 5 summarizing all results of '''Step 4'''. Additionally, all results at all grid resolutions are available online in the benchmark repository <ref name="TGV-coria-cfd">[https://benchmark.coria-cfd.fr benchmark.coria-cfd.fr]</ref>.
 
Starting with the evolution of maximum temperature versus time, a perfect visual agreement between all three codes is observed at all resolutions, as shown in Fig. 10 (with a resolution of <math>512^3</math>). This quantity does not appear to be difficult to predict correctly, as already observed previously for the non-reacting flow in '''Step 3''', provided that the pressure variation due to the heat release is correctly taken into account.
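For a constant-volume (periodic) configuration such as this one, heat release necessarily raises the mean pressure together with the mean temperature. A minimal ideal-gas sketch of that coupling, using placeholder values rather than the benchmark's actual initial conditions, and assuming a roughly constant mean molar mass:

```python
# Illustrative only: ideal-gas pressure rise in a closed, constant-volume box.
# With fixed volume and an approximately fixed mean molar mass, p2/p1 = T2/T1.
# The numbers below are placeholders, not the benchmark's actual conditions.

p1 = 101325.0   # initial mean pressure [Pa] (assumed)
T1 = 300.0      # initial mean temperature [K] (assumed)
T2 = 600.0      # mean temperature after some heat release [K] (assumed)

p2 = p1 * (T2 / T1)   # ideal gas law at constant volume and composition
print(p2)             # mean pressure doubles with the mean temperature
```

A code that ignored this mean-pressure variation would mispredict the maximum temperature, which is why the caveat above matters.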
 
== Comparing results at spatial resolution of <math>256^3</math> ==
  
The results shown in this section have been obtained for the same grid size as in '''Step 3''', i.e. <math>256^3</math> for '''YALES2''' and '''DINO''' and <math>252^3</math> for '''Nek5000'''. The corresponding results for velocity (Fig. 11) and temperature (Fig. 12, left) at time <math>t = 2 \tau_{ref} = 0.5 ms</math> along the centerlines of the domain show visually a perfect agreement. Nevertheless, the three codes show slight differences concerning heat release and some mass fraction profiles (in particular <math>Y_{O_2}</math> and <math>Y_{OH}</math>) around the center of the domain, as can be observed from Figs. 12 (right) and 13. These differences, though small, are larger than those experienced in the non-reacting case. Note that there is originally no oxygen in this region, explaining why the mass fraction of <math>O_2</math> is still smaller than the mass fraction of <math>OH</math> there. One reason behind these discrepancies might be the well-known stiffness of the chemical source terms, inducing different non-linear effects as a function of the underlying algorithms employed for integration in time. Another possible source of error is the employed spatial discretization, which might still be insufficient to perfectly capture the reaction front; in the present case, the typical cell size is approximately 25 µm. To check this last point, the simulations have been repeated with a finer spatial resolution, as discussed in the next subsection.
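Why stiff source terms make the time-integration algorithm matter can be sketched on a toy linear relaxation equation (a stand-in for a fast chemical relaxation, not the benchmark's hydrogen chemistry; the rate and timestep below are illustrative assumptions):

```python
# Toy illustration (not from the benchmark): integrate dy/dt = -lam * y,
# a linear stand-in for a stiff chemical relaxation with rate lam, with an
# explicit and an implicit Euler step at the same timestep dt.

def explicit_euler(y, dt, lam):
    # y_{n+1} = y_n + dt * f(y_n); stable only for dt < 2 / lam
    return y + dt * (-lam * y)

def implicit_euler(y, dt, lam):
    # y_{n+1} = y_n + dt * f(y_{n+1})  =>  y_{n+1} = y_n / (1 + lam * dt)
    return y / (1.0 + lam * dt)

def integrate(step, y0, dt, lam, n_steps):
    y = y0
    for _ in range(n_steps):
        y = step(y, dt, lam)
    return y

lam = 1.0e4   # fast chemical timescale 1/lam (illustrative)
dt = 5.0e-4   # timestep chosen from the flow, too large for the chemistry
y_exp = integrate(explicit_euler, 1.0, dt, lam, 20)
y_imp = integrate(implicit_euler, 1.0, dt, lam, 20)
print(y_exp)  # blows up: amplification factor |1 - lam*dt| = 4 > 1
print(y_imp)  # decays towards 0, as the exact solution does
```

Real solvers operate far from this caricature, but the same mechanism explains why different (stable, accurate) time-integration algorithms can leave slightly different footprints on stiff quantities such as heat release.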
[[File:vx_x_lm_256.pdf]]
[[File:vy_y_lm_256.pdf]]

'''Velocity''' at time <math>t = 2 \tau_{ref} = 0.5 ms</math> for '''3-D reacting case''' ('''Step 4''') with <math>N = 256^3</math> grid points.

[[File:T_y_lm_256.pdf]]
[[File:hr_y_lm_256_withzoom.pdf]]

'''Temperature''' (left) and '''heat release profiles''' (right) at <math>x = 0.5 L</math>, <math>z = 0.5 L</math>, and time <math>t = 2 \tau_{ref} = 0.5 ms</math> for '''3-D reacting case''' ('''Step 4''') with <math>N = 256^3</math> grid points.

[[File:.pdf]]
[[File:.pdf]]
[[File:.pdf]]

Mass fraction profiles of <math>H_2</math>, <math>O_2</math>, and <math>OH</math> at <math>x = 0.5 L</math>, <math>z = 0.5 L</math>, and time <math>t = 2 \tau_{ref} = 0.5 ms</math> for '''3-D reacting case''' ('''Step 4''') with <math>N = 256^3</math> grid points.
 
== Comparing results at spatial resolution of <math>512^3</math> ==
  
The present results have been obtained on a grid size of <math>512^3</math> for '''YALES2''' and '''DINO''', while '''Nek5000''' relies on a similar '''discretization''' size of <math>514^3</math> (57 '''spectral elements''' of order 9 in each direction). To reduce computational costs, the simulation is conducted only for the first <math>t = 2 \tau_{ref} = 0.5 ms</math> of physical time. Only the quantities showing visible discrepancies at a resolution of <math>256^3</math> (heat release, <math>Y_{O_2}</math>, <math>Y_{OH}</math>) are discussed here in the interest of brevity, since all other quantities already revealed a perfect agreement for the previous resolution. It can be observed in Figs. 14 and 15 that doubling the spatial resolution in each direction did not improve the comparisons in a clear way; marginal differences still exist between the codes, and a convergence towards a unique solution is not really visible. To discuss this issue in more detail, a refined analysis is necessary, as discussed in the next subsection.
  
[[File:.pdf]]
[[File:.pdf]]

'''Temperature''' (left) and '''heat release profiles''' (right) at <math>x = 0.5 L</math>, <math>z = 0.5 L</math>, and time <math>t = 2 \tau_{ref} = 0.5 ms</math> for '''3-D reacting case''' ('''Step 4''') with <math>N = 512^3</math> grid points.

[[File:.pdf]]
[[File:.pdf]]

Mass fraction profiles of <math>O_2</math> (left) and <math>OH</math> (right) at <math>x = 0.5 L</math>, <math>z = 0.5 L</math>, and time <math>t = 2 \tau_{ref} = 0.5 ms</math> for '''3-D reacting case''' ('''Step 4''') with <math>N = 512^3</math> grid points.

== Quantitative comparisons at the center point at <math>t = 0.5 ms</math> ==
  
 
In Table 4 the values of different variables at the center of the numerical domain at time <math>t = 2 \tau_{ref} = 0.5 ms</math> are presented and analyzed. These values have been obtained for the three different codes involved in the benchmark (from left to right, '''YALES2''', '''DINO''', '''Nek5000'''), for an increasing spatial resolution from left to right, but also with different timesteps. The controlling time-limiter (as a condition on maximum CFL or Fourier number with the corresponding value) is also listed in the table; it depends on the retained criteria and on the explicit or implicit integration of the corresponding terms in the equations.

Looking separately at the values obtained by each code, it is not always easy to recognize the convergence toward a single value that would be expected from a grid-independence analysis. Comparing the last column for each code, a good overall agreement is observed, in spite of differences regarding algorithms, resolution in space and in time. Nevertheless, the agreement is never perfect, and trends can be seen more clearly by computing differences. This is why, choosing arbitrarily the results of the implicit time integration at the highest spatial resolution with '''DINO''' (<math>768^3</math>) as a reference, all corresponding relative errors have been computed.
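For an explicit scheme, such a controlling time-limiter is typically the minimum of a convective (CFL) and a diffusive (Fourier) constraint. A minimal sketch; only the roughly 25 µm cell size is quoted in the text, while the velocity, diffusivity and limit values below are illustrative assumptions:

```python
# Hypothetical sketch of how an explicit timestep limiter is selected.
# dt_cfl scales with dx (convection) and dt_fourier with dx**2 (diffusion),
# so refining the grid tightens the Fourier limit faster than the CFL one.

def admissible_dt(dx, u_max, alpha, cfl_max=0.5, fo_max=0.3):
    """Return the admissible explicit timestep and the controlling limiter."""
    dt_cfl = cfl_max * dx / u_max        # convective (CFL) limit
    dt_fo = fo_max * dx * dx / alpha     # diffusive (Fourier) limit
    return (dt_cfl, "CFL") if dt_cfl < dt_fo else (dt_fo, "Fourier")

dx = 25e-6       # ~25 micron cell, as quoted for the 256^3 grid
u_max = 10.0     # illustrative peak velocity [m/s] (assumption)
alpha = 2.0e-4   # illustrative hot-gas thermal diffusivity [m^2/s] (assumption)

dt, limiter = admissible_dt(dx, u_max, alpha)
print(dt, limiter)
```

With these placeholder values the diffusive limit controls the step, which also illustrates why refining the grid is "connected to a reduction of the timestep", as noted below.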
 
Analyzing in detail all the values, the following intermediate conclusions can be drawn:

• The overall agreement between the three completely independent high-resolution codes employed in the benchmark is very good, with typical relative differences of the order of 1% for the essential quantities used to analyze turbulent combustion (temperature, mass fractions, heat release).

• Compared to the differences observed in the previous verification step (errors below 0.03%), the variations are obviously much larger, typically by two orders of magnitude. This is a result of the far more challenging configuration, with additional physicochemical complexity, stiffer profiles, and highly non-linear processes in space and time.

• Increasing further the '''spatial resolution''' (which is also connected to a '''reduction of the timestep''') does not seem to improve much the observed agreement between the codes. For all grids in the analysis finer than <math>256^3</math>, overall differences of the order of 1% are observed. Often, using a finer resolution leads to a better agreement for most of the indicators, but to a worse comparison for some other ones.

• Though this has been attempted, it was impossible to obtain meaningful predictions using Richardson extrapolation [2][3], since the results of all codes are non-monotonic when increasing the resolution in space.

• Somewhat unexpectedly, the observed uncertainty is in the same range for temperature, mass fractions of main species or of radicals, and heat release. Quantities that are typically considered more sensitive (radicals, heat release) do not lead to larger discrepancies in the analysis.
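Why non-monotonic results defeat Richardson extrapolation can be sketched with the standard observed-order formula: for three solutions on grids refined by a constant ratio, the order follows from the ratio of successive differences, which must have the same sign. The solution values below are made up for illustration:

```python
import math

# Sketch of Richardson extrapolation for three solutions on grids refined by a
# constant ratio r (coarse -> medium -> fine). The observed order is
#   p = ln((f_coarse - f_medium) / (f_medium - f_fine)) / ln(r),
# which only exists if the solution approaches its limit monotonically.

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    num = f_coarse - f_medium
    den = f_medium - f_fine
    if num * den <= 0.0:      # non-monotonic: the logarithm is undefined
        return None
    return math.log(num / den) / math.log(r)

# Monotonic, second-order data (made up): the order ~2 is recovered.
print(observed_order(1.040, 1.010, 1.0025))
# Non-monotonic data (made up), as observed in the benchmark: no valid order.
print(observed_order(1.010, 1.002, 1.005))
```

In the second call the medium-to-fine difference changes sign, so no convergence order (and hence no extrapolated value) can be computed, matching the observation in the bullet above.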
  
 
Finally, the central finding is that all codes employed in the benchmark deliver suitable results for this configuration, and this already at a typical grid resolution of <math>256^3</math> for this particular case. An irreducible uncertainty of the order of 1% is observed for all quantities relevant for turbulent combustion. This uncertainty, noticeably larger than for cold flows, is apparently the result of stiff non-linear processes, of different splitting schemes, and of the different libraries/library versions employed for computing thermodynamic, diffusion, and reaction parameters. After this detailed analysis of uncertainty, it is necessary to quantify the corresponding numerical costs needed to reach this level of accuracy.

Revision as of 18:31, 24 August 2020


== References ==

1. [https://benchmark.coria-cfd.fr benchmark.coria-cfd.fr]

2. J.H. Ferziger, M. Peric, ''Computational Methods for Fluid Dynamics'', Springer, 2012.

3. I.B. Celik, U. Ghia, P.J. Roache, C.J. Freitas, H. Coleman, P.E. Raad, Procedure for estimation and reporting of uncertainty due to discretization in CFD applications, ''J. Fluids Eng.'' 130, 2008.