Over the last few weeks, VEXTEC has explored the two main philosophies of modern structural design: safe life design and damage tolerant design. Today we share with you a summary of these methods, including their benefits and limitations. We will also look at other considerations that are commonly encountered when implementing either method in a manufacturing environment, and how VEXTEC’s Virtual Life Management technology can help.

**Safe Life Design**

Safe life design is a common method to predict the durability of a design in many industries, including automotive, pressure vessels, bearings, and portions of jet engines and aircraft landing gear. It is employed by manufacturers when it is understood that regular structural inspection of their products would not be possible or practical. Components are removed from service when their calculated safe life expires, regardless of the condition of the component.

Safe life is based on laboratory testing of simple specimens that are cyclically loaded to create the test points of an S-N curve. Damage initiates naturally within the material microstructure and grows to final failure. The resulting S-N curve has three major limitations: 1) due to the large scatter in lifetime, many tests must be performed to obtain a statistically confident dataset; 2) the simple test specimens do not resemble the geometry of complex components (lack of similitude); and 3) the analysis usually considers only the peak stress the component is expected to see. The design method does not account for unforeseen rogue flaws in the material, nor for rare stress spikes in operation. Thus, large safety factors are applied to the design allowable life to theoretically ensure that any naturally-initiated damage will not threaten the acceptable reliability over the life of the structure. The safety factors may be applied to life or load, or, in the case of pressure vessels, to both life *and* load. While the goal of safety factors (also known as ‘knockdown factors’ or ‘margins of safety’) is admirable, their implementation is arbitrary at best. Using them leads to structures that are heavy and overly conservative, and it adds to the cost of manufacturing (in raw material, processing, and machining). There is no standardized way of implementing these factors across industries, because institutional knowledge often trumps hard physics when it comes to design.
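To make the knockdown idea concrete, here is a minimal sketch of how safe-life factors reduce a design allowable life. The Basquin S-N parameters and the factor values are illustrative assumptions only, not data for any real material or any particular industry practice.

```python
# Sketch: safe-life knockdowns applied to a Basquin-form S-N curve.
# All constants below are invented for illustration.

def basquin_life(stress_amplitude, sigma_f=900.0, b=-0.09):
    """Mean cycles to failure from the Basquin relation S = sigma_f * N**b."""
    return (stress_amplitude / sigma_f) ** (1.0 / b)

def safe_life(stress_amplitude, life_factor=4.0, load_factor=1.5):
    """Apply knockdowns on both load and life (as for pressure vessels)."""
    # Knock down on load: analyze at a stress higher than expected in service.
    n_mean = basquin_life(stress_amplitude * load_factor)
    # Knock down on life: divide the predicted mean life.
    return n_mean / life_factor

mean_life = basquin_life(300.0)
allowable = safe_life(300.0)
print(f"mean life: {mean_life:.0f} cycles, design allowable: {allowable:.0f} cycles")
```

The point of the sketch is only the mechanics: a component is retired at the (much smaller) allowable life regardless of its actual condition.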

**Damage Tolerant Design**

Damage tolerant design is a methodology that uses the physics-based principles of fracture mechanics (in particular, the stress intensity factor) to account for and measure the initiation and growth of damage in a structure in operational service. Industries that employ this methodology include military and civil aerospace and rotorcraft, heavy-duty cranes, and bridges. Manufacturers who use this method can design lighter products or permit more severe loading cycles, because routine periodic inspection protocols are used to detect and repair fatigue cracks before they reach critical sizes.

Traditional fracture mechanics produces less scatter in the resulting S-N curve than safe life design, and it can account for the geometry and stress distributions of complex designs. Residual stress can be incorporated into the stress intensity factor, and complex load missions can also be assessed. The main limitation of damage tolerant design is that a crack must be assumed to be present in the component, because fracture mechanics predicts only the cycles required for a crack to grow from one size to another. Linear elastic fracture mechanics, the most commonly used form, assumes the crack is large relative to the material’s microstructure. By assuming this initial crack size, the portion of the lifetime that would actually have been spent developing that initial crack is excluded from the durability of the design. This conservative assumption can greatly reduce the design allowable life. Advanced fracture mechanics methods have been developed in recent decades both to reduce the size of the assumed initial crack and to understand fatigue crack closure effects. The additional testing needed to develop these methods can prove expensive, and those costs, paired with the cost of routine end-user inspections, must be weighed during the design and manufacturing processes.
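The "cycles from one crack size to another" calculation can be sketched with a simple Paris-law integration. The Paris constants, geometry factor, and crack sizes below are illustrative assumptions; they stand in for whatever material data and stress-intensity solution a real damage tolerance analysis would use.

```python
# Sketch: damage-tolerant life as the cycles for a crack to grow from an
# assumed initial size a0 to a critical size ac, integrating the Paris law
# da/dN = C * (dK)**m with dK = Y * dS * sqrt(pi * a). Constants are notional.
import math

def paris_life(a0, ac, delta_s, C=1e-12, m=3.0, Y=1.12, steps=10000):
    """Numerically integrate dN = da / (C * dK**m) from a0 to ac (metres)."""
    da = (ac - a0) / steps
    cycles, a = 0.0, a0
    for _ in range(steps):
        dK = Y * delta_s * math.sqrt(math.pi * a)   # MPa*sqrt(m)
        cycles += da / (C * dK ** m)
        a += da
    return cycles

# A larger assumed initial crack gives a shorter, more conservative life:
print(f"a0 = 1 mm:   {paris_life(0.001, 0.02, 100.0):.3e} cycles")
print(f"a0 = 0.1 mm: {paris_life(0.0001, 0.02, 100.0):.3e} cycles")
```

The two printed lives illustrate the point in the text: the choice of assumed initial crack size dominates the predicted life, which is exactly why the large-initial-flaw assumption is so conservative.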

**VLM-Assisted Design**

VEXTEC’s Virtual Life Management (VLM) technology can reduce assumptions and cost in either type of design, or can be used to combine the best aspects of safe life with damage tolerance to create a “total life” approach:

a) In safe life design, VLM can be used to predict the stress and strain vs. fatigue life (S-N curve) test results of simple test specimens, as well as of components with complex geometry and loading. Because VLM uses finite element methods, the simple and complex geometries drive the differences in the fatigue results (using the same set of empirical material parameters), so the similitude issue in safe life design is avoided. Virtual Life Management can be implemented without the need for large safety factors, as it has been shown to predict both the mean and the scatter in fatigue life of a final component’s design. Virtually testing many designs first can yield large savings in money and time compared to manufacturing and testing a multitude of “possible design” components.

b) The VLM approach can also be used in damage tolerance analysis without the need for the large initial crack assumption. The use of Monte Carlo statistical techniques advances fracture mechanics to predict the actual fatigue response and its scatter. The method simulates fatigue crack growth starting at the naturally-occurring microstructural features (rather than at an assumed large initial crack size), using the appropriate fracture mechanics processes (crack nucleation, short-crack growth, long-crack growth) to more accurately predict the fatigue durability of complex components under complex loading. This analysis provides a more realistic (and less conservative) life prediction, adding value to manufactured components.

c) By integrating safe life with fracture mechanics, VLM creates a *total lifetime prediction*, from the initially unflawed component to final failure. VLM simulates the material at the fundamental level, including all of the individual grains, voids, and inclusions that exist in the material microstructure of the component. As viewed under a microscope, the microstructure of each component, and of each location within a component, is different. VLM uses probabilistic methods to capture these differences by representing the material as space-filling 3-D statistical volume elements. Structural finite element analysis is used to capture the complex stress and temperature variations throughout the 3-D component, along with geometry and residual stresses. The finite element stresses and temperatures are combined with the material volume elements to simulate damage as it initiates and grows through the component. This process captures the similitude needed for safe life and predicts the number of cycles spent initiating a crack for damage tolerance, all using probabilistic methods to capture the uncertainties that exist across a fleet of components, and thus predicting the true probability of when components will fail in the field.
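The "total life" idea of (a)–(c) can be caricatured in a few lines of Monte Carlo: sample an initiation life and an initial flaw size per virtual component, add a crack-growth life, and look at the scatter across the simulated fleet. Every distribution and constant below is invented for illustration; VLM's actual microstructural and finite element models are far richer than this sketch.

```python
# Sketch: Monte Carlo "total life" = initiation life + propagation life,
# repeated over a virtual fleet to expose fleet-level scatter.
# All distributions and constants are notional, for illustration only.
import math
import random

random.seed(0)

def simulate_component():
    # Initiation life: lognormal scatter around a nominal value (cycles).
    n_init = random.lognormvariate(math.log(5e4), 0.5)
    # Initial flaw size (m) set by a sampled microstructural feature.
    a0 = random.uniform(10e-6, 50e-6)
    # Propagation: simple Paris-law integration to a fixed critical size.
    C, m, Y, delta_s, ac = 1e-11, 3.0, 1.12, 150.0, 0.005
    a, n_grow = a0, 0.0
    da = (ac - a0) / 2000
    for _ in range(2000):
        dK = Y * delta_s * math.sqrt(math.pi * a)
        n_grow += da / (C * dK ** m)
        a += da
    return n_init + n_grow

lives = sorted(simulate_component() for _ in range(1000))
print(f"median life: {lives[500]:.3e} cycles")
print(f"~1-in-1000 life: {lives[9]:.3e} cycles")
```

The useful output of such a simulation is not a single number but the whole distribution, from which a fleet-level failure probability at any cycle count can be read off.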

We hope you have found this series on structural design principles informative. Please contact us if you feel that your company could benefit from more-informed decision making capabilities in your products’ design process.

A good summary. However, the lack of detail means that it is too much motherhood and apple pie. The key issue is how the analysis of the growth of small, naturally occurring initial flaws is done. How are the stress intensity factors calculated, and what crack growth equation is used? How do you account for the lack of closure for such small initial defects, and how is load interaction accounted for, etc.?

Good questions. VLM provides a software framework using a statistically representative volume element (SVE) approach when the damage is small, such as short cracks. This models the material as a space-filling network of features including grains, grain boundaries, inclusions, voids, martensitic laths, etc. Statistical distributions of properties such as size, location, orientation, stiffness, strength, etc. are assigned to each feature. Then the governing equations that you speak of are used to propagate the damage through the random microstructure. These can be cyclic crystal plasticity models based on dislocation theory, such as Tanaka-Mura or Bilby-Cottrell-Swinden, or purely empirical models. Crack closure models such as those by Irwin are applicable. The SVE approach provides the statistical distribution of all of the cracks in the structure as a function of cycles. For details, see: “Microstructural-Based Physics of Failure Models to Predict Fatigue Reliability,” Robert G. Tryon, Animesh Dey, and Ganapathi Krishnan, Journal of the IEST, Vol. 50, No. 2, 2007, pp. 73–84. Let me know if you wish me to email the paper.
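As a rough illustration of the SVE idea for the nucleation stage only: sample grain sizes from a distribution and evaluate a Tanaka-Mura-type relation per grain, which yields a distribution of nucleation lives rather than a single value. All property values here are notional assumptions, and this omits the short- and long-crack growth stages entirely.

```python
# Sketch: grain-by-grain crack nucleation life via a Tanaka-Mura-type
# relation N_i = 8*G*Ws / (pi*(1-nu)*(d_tau - 2k)**2 * d), where d is the
# grain size. All material constants are invented for illustration.
import math
import random

random.seed(1)

G, NU, WS, K = 80e3, 0.3, 2.0, 50.0  # shear modulus (MPa), Poisson ratio,
                                     # specific fracture energy, friction stress

def tanaka_mura_cycles(grain_size_m, d_tau=150.0):
    """Cycles to nucleate a crack in one surface grain of size d (m)."""
    return 8 * G * WS / (math.pi * (1 - NU) * (d_tau - 2 * K) ** 2 * grain_size_m)

# Sample a lognormal grain-size population (mean ~30 microns).
grains = [random.lognormvariate(math.log(30e-6), 0.3) for _ in range(1000)]
lives = sorted(tanaka_mura_cycles(d) for d in grains)
# The largest grains nucleate first, driving the weakest-link scatter.
print(f"first nucleation: {lives[0]:.3e} cycles, median: {lives[500]:.3e}")
```

The scatter in nucleation life falls directly out of the sampled grain-size distribution, which is the essence of the SVE argument above.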

Yes please do send the paper.

The problem is that these approaches do not reproduce the crack depth versus cycles histories seen in tests where the initial cracks are naturally occurring, typically 1-30 microns deep. In such cases it is now generally accepted that there are little to no R-ratio effects, and hence little closure. In such cases the statistical variation in the depth versus cycles data can be (approximately) accounted for by the variation in the local threshold. Hence, for all practical purposes, you only have one parameter to vary. This makes the determination of the probability distribution of the life curves very much simpler. The Tanaka-Mura and BCS approaches are thus superseded, as theory and practice have now moved on from these older approximate theories, which, although mathematically pretty, are at odds with experimental data.

The jury is out for cracks that are less than 1 micron, as there is insufficient data to make a judgement. The same goes for cracks in very thin (i.e. 10-100 micron thick) metal foils, although there is limited data for this case. The problem appears to be that for the thinner foils the behaviour is similar to that of sub-micron cracks in thick structures.