
Reliability prediction. Forecasting reliability indicators of onboard equipment of space vehicles under the influence of low-intensity ionizing radiation using block diagrams.

To assess how closely the empirical distribution approximates the theoretical one, the Romanovsky goodness-of-fit criterion is used, which is determined by the formula

K_R = |χ² - r| / √(2r),

where χ² is the Pearson criterion;

r is the number of degrees of freedom.

If the condition K_R < 3 is satisfied, this gives grounds for asserting that the theoretical distribution of reliability indicators can be accepted as the law of this distribution.
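A minimal sketch of this check (hypothetical numbers, assuming the form K_R = |χ² - r| / √(2r) given above):

```python
import math

def romanovsky(chi2: float, r: int) -> float:
    """Romanovsky criterion from Pearson's chi-squared statistic and r degrees of freedom."""
    return abs(chi2 - r) / math.sqrt(2 * r)

# Hypothetical values for illustration only.
K = romanovsky(chi2=9.2, r=7)
print(f"K_R = {K:.2f} ->", "accept the theoretical law" if K < 3 else "reject it")
```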

The Kolmogorov criterion allows one to evaluate the validity of a hypothesis about the distribution law for small volumes of observations of a random variable:

λ = D·√n,

where D is the maximum absolute difference between the actual and theoretical cumulative frequencies of the random variable, and n is the number of observations.

On the basis of special tables, the probability P is determined that, if the given variational attribute is indeed distributed according to the considered theoretical law, then, due to purely random causes, the maximum discrepancy between the actual and theoretical accumulated frequencies will be no less than that actually observed.

Based on the calculated value of P, conclusions are drawn:

a) if the probability P is large enough, then the hypothesis that the actual distribution is close to the theoretical one can be considered confirmed;

b) if the probability P is small, then the hypothesis is rejected.

The boundaries of the critical region for the Kolmogorov criterion depend on the sample size: the smaller the number of observation results, the higher the critical probability value must be set. A sample of 10-15 observed failures therefore requires a noticeably higher critical probability than a sample of more than 100. It should be noted, however, that for large volumes of observations it is better to use the Pearson criterion.

The Kolmogorov criterion is much simpler than other goodness-of-fit criteria, which is why it is widely used in studying the reliability of machines and their elements.
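A minimal sketch of such a check (hypothetical failure times, an assumed exponential law, and the asymptotic Kolmogorov distribution from SciPy):

```python
import numpy as np
from scipy.special import kolmogorov  # survival function of the Kolmogorov distribution
from scipy.stats import expon

t = np.sort(np.array([120.0, 340.0, 410.0, 560.0, 700.0, 950.0]))  # failure times, h
lam_hat = 1.0 / t.mean()                      # rate of the assumed exponential law

F_theor = expon.cdf(t, scale=1.0 / lam_hat)   # theoretical CDF at the observed points
F_hi = np.arange(1, len(t) + 1) / len(t)      # empirical CDF just after each point
F_lo = np.arange(0, len(t)) / len(t)          # empirical CDF just before each point
D = max(np.max(np.abs(F_hi - F_theor)), np.max(np.abs(F_lo - F_theor)))

lam_K = D * np.sqrt(len(t))                   # lambda = D * sqrt(n)
P = kolmogorov(lam_K)                         # P(discrepancy >= observed), asymptotic
print(f"D = {D:.3f}, lambda = {lam_K:.3f}, P = {P:.3f}")
```

A large P supports the hypothesis; the asymptotic distribution is rough for a sample this small, which is exactly why the critical value must be set higher for small samples.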

Question 22. The main tasks of predicting the reliability of machines.

Machine reliability is predicted in order to determine the patterns of change in the machine's technical condition during operation.

There are three stages of forecasting: retrospection, diagnosis and forecast. At the first stage, the dynamics of change of the machine's parameters in the past is established; at the second, the current technical condition of the elements is determined; at the third, the change of the state parameters of the elements in the future is predicted.

The main classes of machine reliability prediction problems can be formulated as follows:

1. Predicting the patterns of change in machine reliability in connection with the prospects for the development of production, the introduction of new materials, and increases in the strength of parts.

2. Assessing the reliability of a designed machine before it is manufactured. This problem arises at the design stage.

3. Predicting the reliability of a particular machine (unit, assembly) based on the observed changes of its parameters.

4. Predicting the reliability of a certain set of machines based on the results of studying a limited number of prototypes. Problems of this type arise at the production stage.

5. Predicting the reliability of machines under unusual operating conditions (for example, temperature and humidity above the permissible limits).

The specifics of the construction machinery industry require that forecasting problems be solved with an error of no more than 10-15% and with forecasting methods that yield a solution in the shortest possible time.

Methods for predicting the reliability of machines are chosen taking into account the tasks of forecasting, the quantity and quality of the initial information, and the nature of the real process of change of the reliability indicator (predicted parameter).

Modern forecasting methods can be divided into three main groups:

Methods of expert assessments;

Modeling methods, including physical, physical-mathematical and information models;

Statistical methods.

Forecasting methods based on expert assessments consist in generalization, statistical processing and analysis of the opinions of specialists regarding the prospects for the development of this area.

Modeling methods are based on the basic principles of the theory of similarity. Based on the similarity between the indicators of modification A of a machine, whose reliability level was studied earlier, and certain properties of modification B of the same machine, the reliability indicators of modification B are predicted for a given period of time.

Statistical forecasting methods are based on extrapolation and interpolation of predicted reliability parameters obtained from preliminary studies. The method is based on the regularities of changes in machine reliability parameters over time.
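A minimal sketch of such extrapolation (hypothetical wear measurements and an assumed linear trend) for estimating residual life:

```python
import numpy as np

hours = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])   # operating time, h
wear = np.array([0.00, 0.11, 0.23, 0.33, 0.46])          # measured wear, mm (hypothetical)
wear_limit = 0.80                                         # assumed permissible wear, mm

a, b = np.polyfit(hours, wear, deg=1)      # linear trend: wear = a*t + b
t_limit = (wear_limit - b) / a             # time at which the trend reaches the limit
print(f"wear rate = {1000 * a:.2f} mm per 1000 h, predicted resource ~ {t_limit:.0f} h")
```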

Question 23. Stages of predicting the reliability of machines.

When predicting the reliability of machines, the following sequence is followed:

1. Classify parts and assembly units according to the principle of criticality. Higher reliability requirements are set for those parts and assembly units whose failure endangers human life.

2. Formulate the failure concepts for the parts and assembly units of the designed system. Only those parts and assembly units whose failure leads to a complete or partial loss of system operability need to be taken into account.

3. Choose a reliability prediction method depending on the system design stage, the accuracy of the initial data and the assumptions made.

4. Draw up a structural diagram of the product that includes the main functional parts and assembly units (among them those of the power and kinematic circuits), arranged by levels in order of their subordination and reflecting the connections between them.

5. Consider all parts and assembly units, starting from the upper level of the block diagram and ending with the lower one, dividing them into the following groups:

a) parts and assembly units, the indicators of which should be determined by calculation methods;

b) parts and assembly units with specified reliability indicators, including the assigned failure flow parameters;

c) parts and assembly units, the reliability indicators of which should be determined by experimental statistical methods or test methods.

6. For parts and assembly units whose reliability is determined by calculation methods:

determine the load spectra and other operating features, for which functional models of the product and its assembly units are drawn up (these can be represented, for example, by a state matrix);

compose models of the physical processes leading to failures;

establish the criteria of failures and limit states (fracture from short-term overloads, reaching the wear limit, etc.);

classify them into groups according to the failure criteria and select appropriate calculation methods for each group.

7. If necessary, graphs of reliability indicators versus time are plotted, on the basis of which the reliability of individual parts and assembly units, as well as different variants of the structural diagrams of the system, are compared.

8. On the basis of the performed reliability prediction, a conclusion is drawn about the suitability of the system for its intended use. If the calculated reliability is lower than specified, measures are developed to improve the reliability of the system.

Question 24

As noted above, according to the basic principles of calculation for the properties that make up reliability, or for complex reliability indicators of objects, the following methods are distinguished:

forecasting methods;

structural calculation methods;

physical calculation methods.

Forecasting methods are based on using data on the achieved values and the identified trends of change of the reliability indicators of analogue objects to assess the expected reliability level of the object. (Analogue objects are objects similar or close to the one under consideration in purpose, principles of operation, circuit design and manufacturing technology, element base and materials used, operating conditions and modes, and principles and methods of reliability management.)

Structural calculation methods are based on representing the object as a logical (structural-functional) diagram that describes the dependence of the states and transitions of the object on the states and transitions of its elements, taking into account their interaction and the functions they perform in the object. The constructed structural model is then described by an adequate mathematical model, and the reliability indicators of the object are calculated from the known reliability characteristics of its elements.
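A minimal sketch of this idea (assuming independent elements with known probabilities of failure-free operation and a hypothetical series-parallel diagram):

```python
from functools import reduce

def series(*p: float) -> float:
    """Series group: all elements must work."""
    return reduce(lambda a, b: a * b, p)

def parallel(*p: float) -> float:
    """Parallel group (active redundancy): at least one element must work."""
    return 1.0 - reduce(lambda a, b: a * b, (1.0 - x for x in p))

# Hypothetical diagram: element 1 in series with a duplicated pair (2a, 2b) and element 3.
P_system = series(0.99, parallel(0.90, 0.90), 0.98)
print(f"P(system) = {P_system:.4f}")  # ~0.9605
```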

Physical calculation methods are based on mathematical models that describe the physical, chemical and other processes leading to failures of objects (to objects reaching the limit state), and on calculating reliability indicators from known parameters (the object's load, the characteristics of the substances and materials used in it), taking into account the features of its design and manufacturing technology.

Methods for calculating the reliability of a particular object are selected depending on:

the goals of the calculation and the requirements for the accuracy of determining the object's reliability indicators;

the availability and/or the possibility of obtaining the initial information necessary for applying a particular calculation method;

the level of sophistication of the object's design and manufacturing technology and of its maintenance and repair system, which makes it possible to apply the appropriate reliability calculation models.

When calculating the reliability of specific objects, several methods may be used simultaneously, for example, methods for predicting the reliability of electronic and electrical components, with the results then used as input data for calculating the reliability of the object as a whole or of its components by various structural methods.

4.2.1. Reliability prediction methods

Forecasting methods are used:

To substantiate the required reliability level of objects when developing technical specifications, and/or to estimate the probability of achieving the specified reliability indicators when developing technical proposals and analyzing the requirements of the technical assignment (contract);

For an approximate assessment of the expected reliability level of objects at the early design stages, when the information necessary for applying other reliability calculation methods is not yet available;

To calculate the failure rates of commercially available and new electronic and electrical components of different types, taking into account their loading levels, workmanship, and the fields of application of the equipment in which the components are used;

To calculate the parameters of typical tasks and operations of maintenance and repair of objects, taking into account the design characteristics of the object, which determine its maintainability.

To predict the reliability of objects, the following is used:

Methods of heuristic forecasting (expert evaluation);

Methods of forecasting by statistical models;

Combined methods.

Heuristic forecasting methods are based on the statistical processing of independent estimates of the expected reliability indicators of the object under development (individual forecasts) given by a group of qualified specialists (experts) on the basis of the information provided to them about the object, its operating conditions, the planned manufacturing technology and other data available at the time of the assessment. The survey of experts and the statistical processing of the individual forecasts of reliability indicators are carried out by methods generally accepted for the expert evaluation of any quality indicators (for example, the Delphi method).

Forecasting by statistical models is based on extra- or interpolation of dependencies that describe the identified trends of change of the reliability indicators of analogue objects, taking into account their design and technological features and other factors about which information is known for the object under development or can be obtained at the time of the assessment. Models for forecasting are built from data on the reliability indicators and parameters of analogue objects using known statistical methods (multivariate regression analysis, methods of statistical classification and pattern recognition).
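A minimal sketch of such a model (hypothetical analogue-object data, an assumed linear dependence, ordinary least squares):

```python
import numpy as np

# Each analogue object: [element count in thousands, relative load, year index].
X = np.array([[1.2, 0.6, 0], [1.8, 0.7, 1], [2.5, 0.7, 2], [3.1, 0.8, 3]])
mtbf = np.array([900.0, 1100.0, 1250.0, 1400.0])   # observed MTBF of the analogues, h

A = np.hstack([X, np.ones((len(X), 1))])           # add an intercept column
coef, *_ = np.linalg.lstsq(A, mtbf, rcond=None)    # least-squares regression

new_object = np.array([3.6, 0.8, 4, 1.0])          # object under development (+ intercept)
print(f"forecast MTBF ~ {new_object @ coef:.0f} h")
```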

Combined methods are based on the joint application of forecasting by statistical models and heuristic forecasting of reliability, followed by a comparison of the results. Heuristic methods are used here to assess whether the statistical models can be extrapolated and to refine the forecast of reliability indicators based on them. The use of combined methods is advisable when there is reason to expect qualitative changes in the reliability level of objects that are not reflected by the corresponding statistical models, or when the number of analogue objects is insufficient for applying statistical methods alone.

A random event that leads to a complete or partial loss of product performance is called a failure.

According to the nature of the change in the equipment's parameters up to the moment of occurrence, failures are divided into gradual and sudden (catastrophic). Gradual failures are characterized by a fairly smooth change of one or more parameters over time, sudden failures by their abrupt change. According to the frequency of occurrence, failures are one-time (malfunctions) or intermittent.

A malfunction is a one-time self-recovering failure; an intermittent failure is a repeated failure of the same nature.

Depending on the cause of occurrence, failures are divided into persistent and self-eliminating. A persistent failure is eliminated by replacing the failed component, while a self-eliminating failure disappears on its own but may recur. A self-eliminating failure may manifest itself as a malfunction or as an intermittent failure.

Failures occur both due to the internal properties of the equipment and due to external influences, and they are random in nature. For the quantitative description of failures, probabilistic methods of the theory of random processes are used.

Reliability is the property of an object to continuously maintain an operable state for some time. The ability of a product to continuously perform the specified functions during the time specified in the technical documentation is characterized by the probability of failure-free operation, the failure rate and the mean time between failures. The reliability of a product (for example, a cell) is, in turn, determined by the failure rates λi of the components it comprises.
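A minimal sketch of this relationship (hypothetical component failure rates, assuming independent components in a series structure and the exponential law):

```python
import math

lambdas = [0.12e-6, 0.45e-6, 0.08e-6, 1.10e-6]  # component failure rates, 1/h (hypothetical)
lam = sum(lambdas)                              # series structure: the rates add up

t = 10_000.0                                    # mission time, h
P_t = math.exp(-lam * t)                        # probability of failure-free operation
mtbf = 1.0 / lam                                # mean time between failures
print(f"lambda = {lam:.2e} 1/h, P({t:.0f} h) = {P_t:.4f}, MTBF = {mtbf:.2e} h")
```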

Methodologically, the theory of reliability assessment makes it possible to see and justify the previously existing particular reliability assessment models, in particular for components, and to foresee the degree of their completeness, sufficiency and adequacy for solving practical reliability problems.

Researchers of component failures used the principle of causality and applied knowledge from physics, chemistry, thermodynamics and materials science to explain the degradation processes leading to failures. As a result, synthetic terms and concepts appeared, such as "failure mechanism" and "activation energy of the degradation process", which underlie the physical methods of analysis (physics of reliability, physics of aging, physics of failures) and form the basis for developing models for assessing reliability indicators in order to predict the reliability of components. These models are widely used in practical work on the analysis and evaluation of product reliability, including MEA components, and are given in the official standards and catalogs of microcircuits, which are the main type of element-base products of modern technical objects. Knowledge of these models is therefore useful for proper engineering applications.

In order to give an idea of the nature of degradation processes in products, we first show how the concepts of chemical equilibrium, statistical mechanics and the theory of absolute reaction rates can be applied to a system consisting of many particles. This will make it possible to then introduce both the empirical Arrhenius model for estimating reaction rates and the more general Eyring model.

Failure mechanisms are understood as the microscopic processes of change leading to product failure. A failure mechanism is a theoretical model designed to explain the external manifestations of product failure at the atomic and molecular levels. These external manifestations are determined by the type of failure and represent specific, physically measurable states of the product.

The failure mechanism model is usually highly idealized. However, it does predict dependencies, leading to a better understanding of the phenomenon under consideration, although the quantitative results depend on the specific components, composition, and configuration of the product.

Failure mechanisms can be physical and/or chemical in nature. In practice, it is difficult to separate failure mechanisms. Therefore, in the process of analysis, a complex set of mechanisms is often considered as a single generalized failure mechanism. As a rule, of particular interest is one of a number of mechanisms operating simultaneously, which determines the rate of the degradation process and itself develops most rapidly.

Failure mechanisms can be represented either by continuous functions of time, which usually characterize the processes of aging and wear, or by jump functions, which reflect the presence of many undetected defects or latent weak points.

The first group of mechanisms is caused by subtle defects that lead to drift of component parameters beyond the tolerance limits and is typical of most components; the second group manifests itself in a small number of components and is due to gross defects, which are eliminated through technological screening tests (TOI).

Even the simplest component of a product (including IMNE) is a multicomponent heterogeneous system: multiphase, with boundary regions between the phases. To describe such a system, either a phenomenological or a molecular-kinetic approach is used.

The phenomenological approach is purely empirical and describes the state of the system on the basis of measurable macroscopic parameters. For example, for a transistor, from measurements of the drift of the leakage current and breakdown voltage at certain points in time, the relationship between these parameters is established, and on its basis the properties and states of the transistor as a system are predicted. However, these parameters are averaged over many microscopic characteristics, which reduces their sensitivity as indicators of degradation mechanisms.

The molecular-kinetic approach, by contrast, connects the macroscopic properties of the system with a description of its molecular structure. In a system of many particles (atoms and molecules), their motions can be described on the basis of the laws of classical and quantum mechanics. However, owing to the need to take into account a large number of interacting particles, the problem is very large and difficult to solve. Therefore, the molecular-kinetic approach also remains largely empirical.

Interest in the degradation kinetics of components leads to an analysis of how the transformations (transitions) of one equilibrium state into another proceed, taking into account the nature and rate of transformations. There are some difficulties with this analysis.

The operation of components depends mainly on such irreversible phenomena as electrical and thermal conductivity, i.e., it is determined by nonequilibrium processes, whose dependences have to be studied by approximation methods, since components are multicomponent systems consisting of several phases of matter. The presence of many nonequilibrium factors can, under certain conditions, influence the nature and rate of change of the equilibrium states of the system. Therefore, it is necessary to take into account not only combinations of mechanisms, which can change depending on time and load, but also the changes of the mechanisms themselves in time.

Despite these difficulties, a general concept of consideration and analysis can be formulated, based on the fact that in component technology it is customary to decide, from the control of parameters and the results of a certain period of testing, which components of a given set are suitable for a particular application. Rejection is carried out throughout the entire production cycle: from the materials to the testing of finished products.

Thus, it remains only to understand the mechanism of evolution of a finished component from the "good" state to the "defective" state. Experience shows that such a transformation requires overcoming a certain energy barrier, shown schematically in Fig. 5.13.

Fig. 5.13. Energy barrier of the degradation process: P1, P, P2 are the energy levels characterizing the normal, activated and failure states of the system; Ea is the activation energy; δ is the region of instability of the system; A, B, C are the interacting particles of the system

The minimum energy required for a transition from state P1 to state P is called the activation energy Ea of the process, which may be of a mechanical, thermal, chemical, electrical, magnetic or other nature. In semiconductor solid-state products it is most often thermal energy.

If state P1 is the minimum possible energy level of the system and corresponds to the "good" state of the component, then state P corresponds to an unstable equilibrium of the system and to the pre-failure state of the component; P2 corresponds to the "failure" state of the component.

Consider the case where there is one failure mechanism. The state of the system (good or bad) can be characterized by a number of measurable macroscopic parameters. The change, or drift, of these parameters can be recorded as a function of time and load. However, it is necessary to make sure that the accepted group of macroparameters does not reflect a special case of the system's microstate (bad or good). A sign of such a special case is that no two products are identical from the point of view of their microstates: the degradation rate will then differ between them, and the mechanisms themselves may turn out to be different in a given period of time, which means that technological screening tests (TOI) will be ineffective. If the microstates of the components are identical, the failure statistics after their tests will be identical.

Consider the analysis of degradation processes. In a simple system consisting of many particles, let us single out a certain limited number of particles actively participating in the degradation process that leads to degradation of the component's parameters. In many cases, the degree of degradation is proportional to the number of activated particles.

For example, molecules may dissociate into their constituent atoms or ions. The rate of this process (chemical dissociation) will depend on the number of dissociating particles and on their average rate of passage through the energy barrier.

Assume that we have a measurable parameter P of the product's properties, or a certain function f(P) of that parameter, which varies in proportion to the rate of chemical dissociation of some substances making up the materials of the product, the dissociation itself being the main degradation mechanism leading to product failure. In this case, the rate of change of P or f(P) in time t can be expressed as follows:

dP/dt = κ·Na·v̄,

where Na is the number of particles that have reached an energy level sufficient to overcome the energy barrier; v̄ is the average velocity of the activated particles moving through the barrier; κ is the barrier transparency coefficient (it is less than unity, since some of the active particles roll back from the energy top of the barrier).

The problem of determining Na out of the total number of particles in the system can be solved under the following assumptions:

  • 1) only a small part of all the particles of the system has, at any moment, the energy necessary to activate the degradation process;
  • 2) there is an equilibrium between the number of activated particles and the number of the other particles in the system, i.e., the rate of appearance (birth) of activated particles is equal to the rate of their disappearance (death).

Problems of the type under consideration are the subject of research in statistical mechanics and are associated with the Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein statistics.

If we apply the classical Maxwell-Boltzmann statistics, used as a satisfactory approximation for particles of all types (all particles are distinguishable), the number of particles that will be at the same energy level in an equilibrium system of many particles is described as follows:

Na = N·exp(-Ea/(kT)),

where N is the total number of particles; Ea is the activation energy; k is the Boltzmann constant; T is the absolute temperature.

Many years of empirical studies of reaction kinetics have shown that most chemical reactions and some physical processes exhibit a similar dependence of the reaction rate on temperature and on the loss (decrease) of the initial concentration C of the substance, i.e.

-dC/dt = k_r·C.

In other words, the Arrhenius equation is valid for thermally activated chemical reactions. We write it with allowance for quantum-mechanical corrections:

k_r = A·exp(-Ea/(kT)),

where A is the coefficient of proportionality.

Most accelerated component testing is based on the use of the Arrhenius equation, which is widely used, although often not with the required accuracy, to analyze the degradation processes of products and predict their reliability.
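A minimal sketch of this use (hypothetical values; the acceleration factor between two temperatures that follows from the Arrhenius equation):

```python
import math

K_BOLTZ = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Arrhenius acceleration factor between operating and stress temperatures."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp(ea_ev / K_BOLTZ * (1.0 / t_use - 1.0 / t_stress))

# Hypothetical case: Ea = 0.7 eV, operation at 55 C, accelerated test at 125 C.
AF = acceleration_factor(0.7, 55.0, 125.0)
print(f"AF = {AF:.0f}: one test hour at 125 C ~ {AF:.0f} field hours at 55 C")
```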

As applied to electronic products, its earliest use was in the study of failures (faults) of electrical insulation.

Factor A should be calculated taking into account:

  • the average speed of overcoming the energy barrier by particles;
  • the total number of available (participating in the process) particles;
  • energy distribution functions of particles in the system.

Here f* and f_n are the energy distribution functions of the activated and normal particles; δ is the length of the reaction path; C_n is the concentration of normal particles.

Taking into account the translational, rotational and vibrational energies of the particles, the last expression is written in a form suitable for use in failure physics:

k_r = (kT/h)·exp(ΔS*/R)·exp(-ΔH*/(RT)) = (kT/h)·exp(-ΔG*/(RT)),

where ΔG* = ΔH* - T·ΔS*; k is the Boltzmann constant; h is the Planck constant; T is the temperature; ΔG*, ΔS* and ΔH* are, respectively, the standard Gibbs activation energy and the entropy and enthalpy of activation; R is the universal gas constant.

The importance of a decrease of entropy in a system consisting of many particles lies in slowing down the rate of degradation of the product parameter due to an increase in the order of the system. This means an increase in the time between failures, which can be shown by integrating the last equations.

After integration from the nominally permissible value of the electrical parameter P0 to the failure value Pf, substitution of the limits and taking logarithms, the expression for the time t_f for the component to reach the failure state takes the form

t_f = A'·exp(Ea/(kT)),

where the coefficient A' is determined during reliability testing and reflects the pre-failure (i.e., energetically activated) state of the component.

If the time t_f is understood as the mean time between failures, then for the exponential distribution law the failure rate λ can be determined as follows:

λ = 1/t_f.

The considered approach makes it possible to draw only qualitative and semi-quantitative conclusions in the theoretical analysis of component reliability, both because of the multiphase nature and heterogeneity of the multicomponent supersystem of which the component (and even an element of the component) is a part, and because of the form of the temporal experimental models of component degradation. This is evident from the summary of the causes, mechanisms and physico-mathematical models of failures of IC components presented in Table 5.20 (the time models do not always follow a logarithmic relationship; in practice there may be power-law relationships).

The advantage of the approach based on the use of the Arrhenius equation lies in the possibility of predicting parametric failures of products based on accelerated tests. The disadvantage of this approach is the lack of consideration of the design and technological parameters of elements and components.

Thus, the Arrhenius approach is based on an empirical relationship between the electrical parameter of the component or element and the failure mechanism with activation energy Ea. This shortcoming was overcome by the theory of G. Eyring, who introduced the concept of an activated complex of particles and justified it by the methods of statistical and quantum mechanics.

Nevertheless, the Arrhenius-Eyring-Gibbs approach is actively used for solving reliability problems under the assumption of temperature-dependent failure mechanisms, and it underlies the various models used to find the failure rates of IEP that are given in reference literature, manuals and the databases of reliability calculation programs.

Eyring's theory does not take into account the achievements of the Russian thermodynamic school of materials scientists, who creatively mastered and reworked the ideas of D. Gibbs, not greatly revered in America but well loved in Russia and throughout the former USSR. It is known, for example, that V. K. Semenchenko, relying on generalized functions associated with the Pfaff equations (1815, the so-called Pfaffian forms), proposed his own approach (his C-model) and modified the fundamental equations of D. Gibbs.

Table 5.20

Causes, characteristic mechanisms and failure models of components and their elements*

Physico-chemical system | Reliability parameter (indicator) | Cause (mechanism) of failures: degradation processes | Failure model: spontaneous exit time from the steady state τ | Activation energy Ea, eV

Sealing coatings (polymers) | MTBF t_r | Destruction (processes of sorption, desorption, migration) | - | -

p-type semiconductor surface | Surface ion concentration n_s | Inversion, electromigration | - | -

Aluminum, massive (bulk) | MTBF t_f | Thermomechanical stresses | - | -

Metallization (film) | MTBF t_f | Electromigration, oxidation, corrosion, electrocorrosion | - | -

Interconnections | Contact resistance R | Formation of intermetallic compounds | - | -

Resistors | Contact resistance R | Oxidation | - | -

Capacitors | Capacitance C | Diffusion, oxidation | - | -

Micromechanical accelerometer (MMA) | Sensing element of the converter of mechanical deformation into acceleration | Microcreep | - | 1.5-2

* Data from: VLSI Technology: in 2 books. Book 2 / C. Mogab [et al.]; transl. from English; ed. S. Sze. Moscow: Mir, 1986. P. 431.

It should be noted that D. Gibbs himself gave a visionary impetus to the development of his ideas. As was said in the preface to the "Principles ...", he "recognizes the inferiority of any theory" that does not take into account the properties of substances, the presence of radiation and other electrical phenomena.

The fundamental equation of matter according to Gibbs (taking into account thermal, mechanical and chemical properties) has the form of a total differential:

dε = t·dη - p·dV + Σ μi·dmi,

where Gibbs uses the following designations: ε is the energy; t is the temperature; η is the entropy; p is the pressure; V is the volume; μi is the chemical potential; mi is the mass of the i-th component (i = 1, ..., n).

Semenchenko, using the method of generalized functions (Pfaffian forms), introduced into the G-model the strengths of the electric (E) and magnetic (H) fields, as well as their corresponding "coordinates", the electric (P) and magnetic (M) polarizations, and modified the G-model to the form

dε = t·dη - p·dV + E·dP + H·dM + Σ μi·dmi.

The step-by-step procedure for applying the simplest model, the Arrhenius model, to the analysis of test data in order to determine the temperature dependence of the degradation processes of components is as follows: components are tested at several elevated temperatures, the times to failure are recorded, ln t_f is fitted against 1/(kT) to obtain Ea, and the result is extrapolated to the operating temperature.
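A minimal sketch of this procedure (hypothetical accelerated-test data; least-squares fit of ln t_f = ln A' + Ea/(kT) from the expression above):

```python
import numpy as np

K_BOLTZ = 8.617e-5                                # Boltzmann constant, eV/K
temps_c = np.array([200.0, 175.0, 150.0, 125.0])  # stress temperatures, C (hypothetical)
t_fail = np.array([40.0, 120.0, 420.0, 1600.0])   # median times to failure, h (hypothetical)

x = 1.0 / (K_BOLTZ * (temps_c + 273.15))          # 1/(kT), 1/eV
slope, intercept = np.polyfit(x, np.log(t_fail), deg=1)

ea = slope                                        # activation energy, eV
t_use = np.exp(intercept + ea / (K_BOLTZ * (55.0 + 273.15)))
print(f"Ea ~ {ea:.2f} eV, extrapolated t_f at 55 C ~ {t_use:.1e} h")
```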

In connection with the foregoing, it is worth commenting on the concept of reliability adopted by the firm Motorola for semiconductor diodes, transistors and ICs.

As is known, reliability is the probability that the IC will successfully perform its functions under given operating conditions for a certain period of time. This is the classic definition.

Another definition of reliability is related to quality. Since quality is a measure of variability, up to potential hidden nonconformity or failure, within a representative sample, reliability is a measure of variability over time under operating conditions. Reliability is therefore quality deployed over time under operating conditions.

Finally, the reliability of products (including components) is a function of a correct understanding of customer requirements and of the implementation of these requirements in the design, manufacturing technology and operation of the products and their structural elements.

The QFD (quality function deployment) method is a technology for deploying, or structuring, the quality function: a product-design approach in which customer requests are identified first, and then the product specifications and manufacturing processes that best meet the identified needs are determined, resulting in higher product quality. The QFD method is useful for establishing and identifying quality and reliability requirements with a view to implementing them in innovative projects.

The ratio of the number of observed failures to the total number of device-hours at the end of the observation period is called the point estimate of the failure rate. This estimate is obtained from observations of a sample, for example, of tested ICs. The interval estimate of the failure rate is performed using the χ²-distribution:

λ* = χ²(α; ν) / (2·n·t),

where λ* is the upper estimate of the failure rate; α is the confidence level; ν = 2r + 2 is the number of degrees of freedom; r is the number of failures; n is the number of products; t is the test duration.

Example 5.6

Calculate the values of the function χ² for the 90% confidence level.

Solution

The calculation results are given in Table 5.21.

Table 5.21

The calculated values of the function χ² for the 90% confidence level
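A minimal sketch of the kind of calculation behind Example 5.6 (hypothetical test data; the χ² quantile taken from SciPy, using the formula above):

```python
from scipy.stats import chi2

def lambda_upper(r: int, n: int, t_hours: float, conf: float = 0.90) -> float:
    """Upper estimate lambda* = chi2(conf; 2r+2) / (2*n*t) for r failures among n devices."""
    nu = 2 * r + 2
    return chi2.ppf(conf, nu) / (2.0 * n * t_hours)

# Hypothetical test: 1000 ICs for 2000 h each, 2 failures, 90% confidence.
print(f"lambda* = {lambda_upper(r=2, n=1000, t_hours=2000.0):.2e} 1/h")  # ~2.7e-06
```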

In order to increase the confidence of the operating-time assessment required today, the firm Motorola uses an approach based on determining the component failure rate in the form of the Eyring equation,

where A, B, C are coefficients determined from test results; T is the temperature; RH is the relative humidity; E is the electric field strength.

Thus, the material presented indicates that, given the fairly wide use of foreign electronic products with unknown reliability indicators, the methods and models presented in this chapter can be recommended for determining and predicting the reliability indicators of components and systems: for components, using physical representations based on the equations of Arrhenius, Eyring, Semenchenko and Gibbs; for systems, using combinatorial analysis (parallel, series and hierarchical structures).

  • The term "valley" used in the figure is a term from physical chemistry (not officially defined), used in particle state diagrams for particles that have lowered their energy and "fallen" from the peak into the valley (by analogy with mountaineering), i.e., overcome the energy barrier and lost energy after doing work, making a transition to a lower energy level characterized by a lower Gibbs energy; this is a consequence of the principle of minimum energy, described in terms of thermodynamic potentials and introduced into science (for example, into theoretical physics) by D. Gibbs himself.
  • Gibbs, J. W. Elementary principles in statistical mechanics, developed with especial reference to the rational foundation of thermodynamics // Gibbs, J. W. Thermodynamics. Statistical Mechanics: transl. from English; ed. D. N. Zubarev; comp. U. I. Frankfurt, A. I. Frank (series "Classics of Science"). Moscow: Nauka, 1982. P. 352-353.

Forecasting the reliability of a technical object is a scientific field that studies methods of predicting the technical condition of an object under the influence of specified factors.

Forecasting is used to determine the residual life of systems, their technical condition, the number of repairs and maintenance actions, the consumption of spare parts, and to solve other problems in the field of reliability.

Reliability indicators can be predicted using various parameters (for example, fatigue strength, wear process dynamics, vibroacoustic parameters, content of wear elements in oil, cost and labor costs, etc.).

Modern forecasting methods are subdivided into three main groups.

1. Methods of expert assessments, the essence of which is reduced to generalization, statistical processing and analysis of the opinions of specialists. The latter substantiate their point of view using information about similar objects and analyzing the state of specific objects.

2. Modeling methods, based on the main provisions of the theory of similarity. These methods consist in forming a model of the object of study, conducting experimental studies of the model, and converting the values obtained on the model to the full-scale object. For example, in accelerated tests, the durability of the product is first determined under forced (severe) operating conditions, and then, using the appropriate formulas and graphs, the durability under real operating conditions is determined.

3. Statistical methods, of which extrapolation finds the widest application. It is based on the patterns of change of the predicted parameters in time. To describe these patterns, a simple analytical function with the minimum possible number of variables is selected.

Thus, statistical processing is used to determine a parameter that serves as a diagnostic sign of the engine's technical condition, for example, crankcase gas blow-by or oil consumption. This parameter is used to predict the residual resource. It should be borne in mind that the actual resource may fluctuate around the value obtained.
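A minimal sketch of such a forecast (hypothetical engine data, assuming the common power-law change model u(t) = v·t^α for the diagnostic parameter):

```python
def residual_resource(t: float, u_t: float, u_limit: float, alpha: float) -> float:
    """Remaining operating time until u reaches u_limit, assuming u(t) = v * t**alpha."""
    return t * ((u_limit / u_t) ** (1.0 / alpha) - 1.0)

# Hypothetical data: blow-by grew to 90 l/min over 4000 h; the limit value is 140 l/min.
print(f"residual resource ~ {residual_resource(4000.0, 90.0, 140.0, alpha=1.3):.0f} h")
```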

The main reasons for inaccurate forecasting are the insufficient completeness, reliability and homogeneity of the information (information is called homogeneous if it concerns identical products operated under identical conditions) and the low qualification of the forecaster.

Forecasting efficiency is determined by the change in the reliability indicator as a result of the introduction of the recommended means of improving it.

Materials of practical classes No. 6 and 7.

Reliability prediction.

Reliability prediction. Reliability prediction based on preliminary information. Use of indirect signs of failure prediction. Individual reliability prediction. Individual reliability prediction using the pattern recognition method (Test procedure. The procedure for training the recognition function. The procedure for predicting product quality. An example of a method for individual product quality prediction.).

PZ.6-7.1. Reliability prediction.

In accordance with the current GOSTs, the terms of reference for designed products (objects) record the requirement of experimental confirmation of a given reliability level, taking into account the acting loads.

For highly reliable objects (for example, space technology), this requirement is overly rigid (in the sense that a large number of objects of the same type must be tested) and is not always practically feasible. Indeed, in order to confirm a probability of failure-free operation P = 0.999 at a 95% confidence level, 2996 successful tests would have to be carried out. If even one test is unsuccessful, the number of required tests increases further. To this should be added the very long duration of testing, since many objects must combine a high reliability level with a long operating time (resource). Hence an important requirement: when assessing reliability, all the accumulated preliminary information about the reliability of technical objects must be taken into account.

Reliability and failure prediction is the prediction of expected reliability and the probability of future failures based on information obtained in the past or on the basis of indirect predictive features.

Reliability calculation at the product design stage bears the features of such forecasting, since an attempt is made to foresee the future state of the product, which is still at the development stage.

Some of the tests discussed above contain elements of predicting the reliability of a batch of products from the reliability of a sample, for example, according to the test plan. These prediction methods are based on the study of the statistical patterns of failures.

But it is also possible to predict reliability and failures by studying the factors that cause failures. In this case, along with the statistical regularities, the physical and chemical factors affecting reliability are also considered, which complicates the analysis but makes it possible to shorten its duration and makes it more informative.

PZ.6-7.2. Reliability prediction based on preliminary information.

When evaluating reliability, it is necessary to take into account all the accumulated preliminary information about the reliability of technical objects. For instance, it is important to combine the calculated information obtained at the preliminary-design stage with the results of testing the object. In addition, the tests themselves are very diverse and are carried out at different stages of the creation of an object and at various levels of its assembly (elements, blocks, units, subsystems, systems). Taking into account information that characterizes the change in reliability as the object is improved can significantly reduce the number of tests required for experimental confirmation of the achieved reliability level.

In the process of creating technical objects, tests are carried out, and based on the analysis of their results, changes are made to the design to improve its characteristics. It is therefore important to assess how effective these measures have been and whether the reliability indicators of the object have really improved after the changes were made. Such an analysis can be performed using the methods of mathematical statistics and mathematical models of reliability change.

If the probability of some event in a single experiment equals p, and in n independent experiments this event (failure) occurred m times, then the confidence limits for p are found as follows.

Case 1. Let m ≠ 0. Then the lower and upper confidence limits for p are found using the coefficients R1 and R2, which are taken from the relevant statistical tables. (PZ.6-7.2)

Case 2. Let m = 0. Then the lower limit is p_н = 0, and the upper limit is equal to

p_в = 1 - (1 - γ)^(1/n). (PZ.6-7.3)

The calculation of P0 is performed according to the equation

P0^n = 1 - γ. (PZ.6-7.4)

The one-sided confidence probabilities γ1 and γ2 are related to the two-sided confidence probability γ* by the known dependence

γ* = γ1 + γ2 - 1. (PZ.6-7.5)

Bench (ground) tests provide the basic information about the reliability of the object, and reliability indicators are estimated from their results. If the technical product is a complex system, with the reliability of some elements determined experimentally and that of others by calculation, then the method of equivalent tests is used to predict the reliability of the complex system.

Flight tests provide additional information about the reliability of the object, and this information should be used to refine and correct the reliability indicators obtained in bench tests. Suppose it is necessary to refine the lower bound of the probability of failure-free operation of an object that has passed bench (ground) tests and flight tests, with m = 0.
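A minimal sketch of the zero-failure case (using P0^n = 1 - γ from equation (PZ.6-7.4) above; the numbers are for illustration):

```python
import math

def p_lower(n: int, gamma: float) -> float:
    """Lower confidence bound P0 from P0**n = 1 - gamma (m = 0 failures observed)."""
    return (1.0 - gamma) ** (1.0 / n)

def tests_needed(p_required: float, gamma: float) -> int:
    """Smallest number n of successful tests with p_lower(n, gamma) >= p_required."""
    return math.ceil(math.log(1.0 - gamma) / math.log(p_required))

print(tests_needed(0.999, 0.95))                         # about 3000 successful tests
print(f"P0 after 500 failure-free tests: {p_lower(500, 0.95):.4f}")
```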
