
Reliability prediction. Predicting the reliability indicators of spacecraft on-board equipment under low-intensity ionizing radiation using block diagrams

To assess how closely the empirical distribution approximates the theoretical one, the Romanovsky criterion of agreement is used, determined by the formula

R = |χ² − r| / √(2r),

where χ² is the Pearson criterion;

r is the number of degrees of freedom.

If the condition R < 3 is met, this gives grounds to state that the theoretical distribution can be accepted as the law of distribution of the given reliability indicators.
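
For illustration, if χ² = 12 with r = 8 degrees of freedom, then R = |12 − 8| / √16 = 1 < 3, and the divergence between the empirical and theoretical distributions can be attributed to chance.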

The Kolmogorov criterion makes it possible to assess the validity of a hypothesis about the distribution law for small volumes of observations of a random variable:

λ = D / √n,

where D is the maximum difference between the actual and theoretical accumulated frequencies of the random variable, and n is the number of observations.

Based on special tables, the probability P is determined that, if the given variational characteristic indeed follows the theoretical distribution under consideration, the maximum discrepancy between the actual and theoretical accumulated frequencies would, due to purely random causes, be no less than that actually observed.

Based on the calculated value P, the following conclusions are drawn:

a) if the probability P is sufficiently high, then the hypothesis that the actual distribution is close to the theoretical one can be considered confirmed;

b) if the probability P is small, then the hypothesis is rejected.

The boundaries of the critical region for the Kolmogorov criterion depend on the sample size: the smaller the number of observation results, the higher the critical probability value must be set.

If the number of failures during observation was 10-15, the critical probability must be set higher than when more than 100 failures are observed. Note, however, that for large volumes of observations it is better to use the Pearson criterion.

The Kolmogorov criterion is much simpler than other goodness-of-fit criteria, so it is widely used in studying the reliability of machines and elements.
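
As a hedged illustration, the Kolmogorov check against an exponential hypothesis can be scripted as follows (the failure times are invented; note that estimating the distribution parameter from the same sample strictly calls for adjusted critical values):

```python
# A minimal sketch of the Kolmogorov goodness-of-fit check for an
# exponential hypothesis; the failure-time sample is illustrative.
import numpy as np
from scipy import stats

times = np.array([120.0, 340.0, 95.0, 410.0, 220.0, 180.0, 530.0, 60.0])
scale = times.mean()                    # exponential scale, 1/lambda

# D is the maximum difference between the empirical and theoretical CDFs
d_stat, p_value = stats.kstest(times, 'expon', args=(0, scale))

print(f"D = {d_stat:.3f}, P = {p_value:.3f}")
# A large P supports the hypothesis; a small P rejects it.
```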

Question 22. The main tasks of predicting machine reliability.

Machine reliability is predicted in order to determine the patterns of change in a machine's technical condition during operation.

There are three stages of forecasting: retrospection, diagnosis, and prognosis. At the first stage, the dynamics of changes in machine parameters in the past are established; at the second, the present technical state of the elements is determined; at the third, changes in the state parameters of the elements are predicted for the future.

The main classes of machine reliability prediction problems can be formulated as follows:

1. Predicting patterns of change in machine reliability in connection with prospects for production development, the introduction of new materials, and increases in the strength of parts.

2. Assessing the reliability of a designed machine before it is manufactured. This task arises at the design stage.

3. Predicting the reliability of a specific machine (component, unit) based on the results of changes in its parameters.

4. Predicting the reliability of a certain set of machines based on the results of studying a limited number of prototypes. Problems of this type arise at the production stage of equipment.

5. Predicting the reliability of machines under unusual operating conditions (for example, temperature and humidity above permissible levels).

The specifics of the construction machinery industry require that forecasting problems be solved with an error of no more than 10-15% and that forecasting methods be used that yield solutions in the shortest possible time.

Methods for predicting machine reliability are selected taking into account forecasting tasks, the quantity and quality of initial information, and the nature of the real process of changing the reliability indicator (predicted parameter).

Modern forecasting methods can be divided into three main groups:

Methods of expert assessments;

Modeling methods, including physical, physical-mathematical and information models;

Statistical methods.

Forecasting methods based on expert assessments consist of the generalization, statistical processing, and analysis of specialists' opinions regarding the prospects for development in the given area.

Modeling methods are based on the basic principles of similarity theory. Based on the similarity of the indicators of modification A, the reliability level of which was studied earlier, and some properties of modification B of the same machine, the reliability indicators of B are predicted for a certain period of time.

Statistical forecasting methods are based on extrapolation and interpolation of predicted reliability parameters obtained as a result of preliminary studies. The method is based on the patterns of changes in machine reliability parameters over time.
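
A minimal sketch of such extrapolation, with invented wear measurements and an assumed limiting value:

```python
# Statistical forecasting by extrapolation: fit a simple trend to a
# reliability parameter observed over time and extrapolate it forward.
import numpy as np

t = np.array([100.0, 200.0, 300.0, 400.0, 500.0])    # operating time, h
wear = np.array([0.12, 0.21, 0.33, 0.41, 0.52])      # measured wear, mm

a, b = np.polyfit(t, wear, 1)      # linear model: wear = a*t + b
limit = 0.80                       # assumed limiting wear, mm
t_limit = (limit - b) / a          # predicted moment the limit is reached

print(f"predicted residual life: {t_limit - t[-1]:.0f} h")
```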

Question 23. Stages of predicting machine reliability.

When predicting machine reliability, the following sequence is followed:

1. Classify parts and assembly units according to the principle of responsibility. Higher reliability requirements are set for parts and assembly units whose failure endangers people's lives.

2. Formulate the failure concepts for the parts and assembly units of the designed system. Only those parts and assembly units whose failure leads to a complete or partial loss of system functionality need be considered.

3. Select a method for predicting reliability depending on the stage of system design, the accuracy of the initial data, and the assumptions made.

4. Draw up a structural diagram of the product that includes the main functional parts and assembly units, including those of the power and kinematic chains, arranged by level in order of their subordination and reflecting the connections between them.

5. Consider all parts and assembly units, starting from the top level of the structural diagram and ending with the bottom, dividing them into the following groups:

a) parts and assembly units, the indicators of which should be determined by calculation methods;

b) parts and assembly units with specified reliability indicators, including assigned failure flow parameters;

c) parts and assembly units, the reliability indicators of which should be determined by experimental statistical methods or test methods.

6. For parts and assembly units whose reliability is determined by calculation methods:

load spectra and other operating features are determined, for which functional models of the product and its assembly units are drawn up (these can be represented, for example, by a state matrix);

models of the physical processes leading to failures are compiled;

criteria for failures and limit states are established (fracture from short-term overloads, the onset of extreme wear, etc.);

they are classified into groups according to failure criteria, and appropriate calculation methods are selected for each group.

7. If necessary, construct graphs of the dependence of reliability indicators on time, on the basis of which the reliability of individual parts and assembly units, as well as various options for structural diagrams of the system, are compared.

8. Based on the reliability prediction, a conclusion is made about the suitability of the system for its intended use. If the calculated reliability is lower than the specified one, measures are developed aimed at increasing the reliability of the calculated system.

Question 24. Predicting machine reliability

As noted above, in accordance with the basic principles of calculating the properties that make up reliability, or the complex reliability indicators of objects, the following groups of methods are distinguished:

Forecasting methods;

Structural calculation methods;

Physical calculation methods.

Forecasting methods are based on using data on the achieved values and identified trends of change of the reliability indicators of analogue objects to assess the expected reliability level of the object. (Analogue objects are objects similar or close to the one under consideration in purpose, operating principles, circuit design and manufacturing technology, element base and materials used, operating conditions and modes, and principles and methods of reliability management.)

Structural calculation methods are based on representing the object as a logical (structural-functional) diagram that describes the dependence of the object's states and transitions on the states and transitions of its elements, taking into account their interaction and the functions they perform in the object, followed by a description of the constructed structural model by an adequate mathematical model and the calculation of the object's reliability indicators from the known reliability characteristics of its elements.
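
A minimal sketch of such a structural calculation for a simple series-parallel diagram (the element reliability values are assumed for illustration):

```python
# Structural method for a series-parallel block diagram:
# element reliabilities are assumed known.
from math import prod

def series(ps):        # all elements must work
    return prod(ps)

def parallel(ps):      # at least one element must work
    return 1 - prod(1 - p for p in ps)

# Example: two redundant elements (0.90 each) in series with a third (0.99)
p_system = series([parallel([0.90, 0.90]), 0.99])
print(f"P(system) = {p_system:.4f}")   # 0.9801
```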

Physical calculation methods are based on the use of mathematical models that describe the physical, chemical, and other processes leading to failures of objects (to objects reaching a limit state), and on the calculation of reliability indicators from known parameters (object load, characteristics of the substances and materials used in the object), taking into account the features of its design and manufacturing technology.

Methods for calculating the reliability of a particular object are selected depending on:

The purposes of the calculation and the accuracy requirements for determining the reliability indicators of the object;

Availability and/or possibility of obtaining the initial information necessary to apply a certain calculation method;

The level of sophistication of the design and manufacturing technology of the object and of its maintenance and repair system, allowing the use of the appropriate reliability calculation models.

When calculating the reliability of specific objects, different methods may be used simultaneously, for example, methods for predicting the reliability of electronic and electrical elements with subsequent use of the results obtained as initial data for calculating the reliability of the object as a whole or of its components by various structural methods.

4.2.1. Reliability prediction methods

Forecasting methods are used:

To justify the required level of reliability of objects when developing technical specifications and/or assessing the likelihood of achieving specified reliability indicators when developing technical proposals and analyzing the requirements of the technical specifications (contract);

For an approximate assessment of the expected level of reliability of objects at the early stages of their design, when there is no necessary information for the use of other methods of reliability calculation;

To calculate the failure rates of serially produced and new electronic and electrical components of different types, taking into account the load level, manufacturing quality, and the application area of the equipment in which the components are used;

To calculate the parameters of typical tasks and operations of maintenance and repair of objects, taking into account the structural characteristics of the object that determine its maintainability.

To predict the reliability of objects, the following are used:

Methods of heuristic forecasting (expert assessment);

Methods of forecasting using statistical models;

Combined methods.

Heuristic forecasting methods are based on the statistical processing of independent estimates of the expected reliability indicators of the object under development (individual forecasts) given by a group of qualified specialists (experts) based on the information provided to them about the object, its operating conditions, the planned production technology, and other data available at the time of the assessment. The survey of experts and the statistical processing of individual forecasts of reliability indicators are carried out using methods generally accepted for expert assessment of any quality indicators (for example, the Delphi method).

Forecasting methods based on statistical models rely on the extrapolation or interpolation of dependencies that describe the identified trends of change of the reliability indicators of analogue objects, taking into account their design and technological features and other factors, information about which is available for the object under development or can be obtained by the time of the assessment. Models for forecasting are built from data on the reliability indicators and parameters of analogue objects using well-known statistical methods (multivariate regression analysis, methods of statistical classification and pattern recognition).

Combined methods are based on the joint application of forecasting methods based on statistical models and heuristic methods to predict the reliability, followed by comparison of the results. In this case, heuristic methods are used to assess the possibility of extrapolation of statistical models and refine the forecast of reliability indicators based on them. The use of combined methods is advisable in cases where there is reason to expect qualitative changes in the level of reliability of objects that are not reflected by the corresponding statistical models, or when the number of analogue objects is insufficient to apply only statistical methods.

A random event leading to the complete or partial loss of functionality of a product is called a failure.

Failures are divided, by the nature of the change in equipment parameters before their occurrence, into gradual and sudden (catastrophic). Gradual failures are characterized by a fairly smooth change of one or more parameters over time; sudden failures, by their abrupt change. By frequency of occurrence, failures can be one-time (crashes) or intermittent.

A crash is a one-time self-correcting failure; an intermittent failure is a failure of the same nature that occurs repeatedly.

Depending on the cause of occurrence, failures are divided into persistent and self-correcting. A persistent failure is eliminated by replacing the failed component, while a self-correcting failure disappears on its own but may recur. A self-correcting failure may manifest itself as a crash or as an intermittent failure.

Failures occur both due to the internal properties of the equipment and due to external influences and are random in nature. To quantify failures, probabilistic methods of the theory of random processes are used.

Reliability is the property of an object to continuously maintain an operational state for some time. The ability of a product to continuously perform specified functions for the time specified in the technical documentation is characterized by the probability of failure-free operation, the failure rate, and the mean time between failures. The failure-free operation of a product (for example, a cell) is in turn determined by the failure rates λi of its constituent components.
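
Under the usual assumptions of series reliability logic and the exponential law, the cell's failure rate is simply the sum of the component rates; a minimal sketch with invented rates:

```python
# Failure-free operation of a cell built from components with
# known (assumed) failure rates, exponential model, series logic.
import math

lambdas = [2e-6, 5e-6, 1e-6]               # failures per hour, assumed
lam_total = sum(lambdas)                   # series logic: rates add
t = 1000.0                                 # mission time, h

p_no_failure = math.exp(-lam_total * t)    # P(t) = exp(-lambda_total * t)
mtbf = 1.0 / lam_total                     # mean time between failures

print(f"P({t:.0f} h) = {p_no_failure:.5f}, MTBF = {mtbf:.0f} h")
```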

The theory of reliability assessment methodologically allows us to see and “justify” previously existing specific models for assessing reliability, in particular components, and also to foresee the degree of their completeness, sufficiency and adequacy for solving practical reliability problems.

Researchers of component failures have used the principle of causality and applied knowledge from physics, chemistry, thermodynamics, and materials science to explain the degradation processes that lead to failure. As a result, synthetic terms and concepts appeared ("failure mechanism", "activation energy of the degradation process") that underlie the physical methods of analysis (reliability physics, aging physics, failure physics) and form the basis for developing models for assessing reliability indicators in order to predict the reliability of components. Such models are widely used in practical work when analyzing and assessing the reliability of products, including MEA components, and are given in official standards and catalogs of microcircuits, which are the main type of element-base product of modern technical objects. Knowledge of these models is therefore useful for proper engineering application.

To give an idea of the nature of degradation processes in products, we first show how the concepts of chemical equilibrium, statistical mechanics, and the theory of absolute reaction rates can be applied to a system consisting of many particles. This will then allow us to introduce both the empirical Arrhenius model for estimating reaction rates and the more general Eyring model.

Failure mechanisms are understood as the microscopic change processes leading to product failure. A failure mechanism is a theoretical model designed to explain the external manifestations of product failure at the atomic and molecular levels. These external manifestations are determined by the type of failure and represent specific, physically measurable states of the product.

The failure mechanism model is usually highly idealized. It does, however, predict interdependencies that lead to a better understanding of the phenomenon under consideration, although the quantitative results depend on the specific components, composition and configuration of the product.

Failure mechanisms may be physical and/or chemical in nature. In practice, it is difficult to separate failure mechanisms. Therefore, during the analysis process, a complex series of mechanisms is often considered as a single generalized failure mechanism. As a rule, of particular interest is one mechanism among a number of mechanisms acting simultaneously, which determines the rate of the degradation process and itself develops most quickly.

Failure mechanisms can be represented either by continuous functions of time, which usually characterize the processes of aging and wear, or by discontinuous functions, reflecting the presence of many undetected defects or qualitative weaknesses.

The first group of mechanisms is caused by subtle defects that lead to component parameters drifting beyond tolerances, and is typical for most components; the second group of mechanisms manifests itself in a small number of components and is caused by gross defects, which are eliminated through technological rejection tests (TRT).

Even the simplest component of a product (including IMNE) is a multicomponent heterogeneous system, multiphase, having boundary areas between phases. To describe such a system, either a phenomenological or a molecular kinetic approach is used.

The phenomenological approach is purely empirical, describing the state of the system in terms of measurable macroscopic parameters. For example, for a transistor, from measurements of the time drift of the leakage current and breakdown voltage at certain points in time, a relationship between these parameters is established, on the basis of which the properties and states of the transistor as a system are predicted. However, these parameters are averaged over many microscopic characteristics, which reduces their sensitivity as indicators of degradation mechanisms.

The molecular-kinetic approach relates the macroscopic properties of a system primarily to a description of its molecular structure. In a system of many particles (atoms and molecules), their motions can be described on the basis of the laws of classical and quantum mechanics. However, because of the need to account for a very large number of interacting particles, the problem is extremely cumbersome and difficult to solve, so in practice the molecular-kinetic approach also remains largely empirical.

Interest in the kinetics of degradation of components leads to an analysis of how transformations (transitions) from one equilibrium state to another occur, taking into account the nature and rate of transformations. There are some difficulties with such an analysis.

The operation of components depends mainly on irreversible phenomena such as electrical and thermal conductivity, i.e., it is determined by nonequilibrium processes, whose study requires approximation methods, since components are multicomponent systems consisting of several phases of matter. The presence of many nonequilibrium factors can, under certain conditions, influence the nature and rate of change of the equilibrium states of the system. Therefore, it is necessary to take into account not only combinations of mechanisms, which can change with time and load, but also changes of the mechanisms themselves over time.

Despite these difficulties, a general concept of consideration and analysis can be formulated, proceeding from the fact that in component technology it is customary to decide, on the basis of monitoring of parameters and the results of a certain period of testing, which components of a given set are suitable for a particular application. The rejection process is carried out throughout the entire production cycle, from materials to testing of finished products.

Thus, all that remains is to understand the mechanism of evolution of the finished component from the “good” state to the “reject” state. Experience shows that such a transformation requires overcoming a certain energy barrier, schematically shown in Fig. 5.13.

Fig. 5.13. Energy barrier between the states of a system:

p1, p, p2 are the energy levels characterizing the normal, activated, and failure states of the system; Ea is the activation energy; δ is the instability region of the system; A, B, C are interacting particles of the system

The minimum energy required for the transition from state p1 to state p is called the activation energy Ea of the process; it can be of a mechanical, thermal, chemical, electrical, magnetic, or other nature. In semiconductor solid-state products it is most often thermal energy.

State p1 is the minimum possible energy level of the system, corresponding to the "good" state of the component; state p corresponds to an unstable equilibrium of the system and to the pre-failure state of the component; state p2 corresponds to the "failure" state of the component.

Let's consider the case where there is one failure mechanism. The state of a system (good or bad) can be characterized by a number of measurable macroscopic parameters. The change, or drift, of these parameters can be recorded as a function of time and load. However, it is necessary to make sure that the adopted group of macroparameters does not reflect a special case of the microstate of the system (good or bad). A sign of a special case is the absence of two identical products from the point of view of their microstate. Then the rate of degradation will not be the same for them, and the mechanisms themselves may turn out to be different at any given period of time, which means that technological rejection tests (TRTs) will be ineffective. If the microstates of the components are identical, the failure statistics after testing will be identical.

Let us consider the analysis of degradation processes. In a simple system of many particles, consider a limited number of particles actively participating in the degradation process that leads to degradation of the component parameters. In many cases the degree of degradation is proportional to the number of activated particles.

For example, dissociation of molecules into their constituent atoms or ions may occur. The rate of this process (chemical dissociation) will depend on the number of dissociating particles and on their average speed of passage through the energy barrier.

Suppose we have a measurable parameter P of the product, and some function of this parameter f(P) changes in proportion to the rate of chemical dissociation of certain substances making up the materials of the product, dissociation itself being the main degradation mechanism leading to product failure. Then the rate of change of P, or of f(P), in time t can be expressed as follows:

df(P)/dt = κ v Na,

where Na is the number of particles that have reached an energy level sufficient to overcome the energy barrier; v is the average speed of movement of the activated particles through the barrier; κ is the transparency coefficient of the barrier (it is less than unity, since some of the active particles roll back from the energy top of the barrier).

The problem of determining Na out of the total number N of particles in the system can be solved under the following assumptions:

  • 1) only a small fraction of all particles of the system has, at any time, the energy necessary to activate the degradation process;
  • 2) there is a balance between the number of activated particles and the number of remaining particles in the system, i.e., the rate of appearance (birth) of activated particles is equal to the rate of their disappearance (death).

Problems of the type under consideration are the subject of statistical mechanics and are associated with the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein statistics.

If we apply the classical Maxwell-Boltzmann statistics, used as a satisfactory approximation for particles of all types (all particles are distinguishable), then the number of particles at a given energy level in an equilibrium system of many particles is described as follows:

Na = N exp(−Ea / (kT)),

where Ea is the activation energy; k is the Boltzmann constant; T is the absolute temperature.
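
For illustration, with Ea = 0.7 eV the factor exp(−Ea/(kT)) is about 1.7·10⁻¹² at T = 300 K and about 1.5·10⁻⁹ at T = 400 K; that is, heating by 100 K increases the share of activated particles by roughly three orders of magnitude.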

In the course of many years of research into reaction kinetics, it was established empirically that most chemical reactions and some physical processes show a similar dependence of the reaction rate on temperature and on the loss (decrease) of the initial concentration C of the substance, i.e.

−dC/dt = A C exp(−Ea / (kT)),

where A is a proportionality factor. In other words, the Arrhenius equation is valid for thermally activated chemical reactions; with quantum-mechanical corrections taken into account, it retains the same exponential form.

Most accelerated testing of components is based on the use of the Arrhenius equation, which is widely used, although often not providing the necessary accuracy, to analyze the degradation processes of products and predict their reliability.

In relation to electronics products, its earliest use was in the study of electrical insulation faults.
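
On this basis, the acceleration factor between a stress temperature and a use temperature follows directly from the Arrhenius exponent; a hedged sketch (the activation energy and temperatures are illustrative):

```python
# Arrhenius acceleration factor used in accelerated testing.
import math

K_EV = 8.617e-5                  # Boltzmann constant, eV/K
E_A = 0.7                        # assumed activation energy, eV
T_USE, T_STRESS = 328.0, 398.0   # 55 C use vs 125 C stress, kelvin

af = math.exp(E_A / K_EV * (1.0 / T_USE - 1.0 / T_STRESS))
print(f"acceleration factor ~ {af:.0f}")   # roughly 80 here
```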

Factor A must be calculated taking into account:

  • average speed of particles overcoming the energy barrier;
  • the total number of particles present (participating in the process);
  • functions of particle energy distribution in the system.

With these taken into account, the factor can be represented as

A = Cn (f*/fn) (v/δ),

where f* and fn are the distribution functions of the activated and normal particles; δ is the reaction path length; Cn is the concentration of normal particles.

Taking into account the translational, rotational, and vibrational energies of the particles, the last expression is written in a form suitable for use in failure physics:

k_r = (kT/h) exp(−ΔG*/(RT)) = (kT/h) exp(ΔS*/R) exp(−ΔH*/(RT)),

where k is the Boltzmann constant; h is the Planck constant; T is the temperature; ΔG*, ΔS*, and ΔH* are the standard Gibbs activation energy, the entropy of activation, and the enthalpy of activation, respectively; R is the universal gas constant.

The importance of reducing entropy in a system consisting of many particles lies in slowing the rate of degradation of the product parameter due to the increasing order of the system. This means an increase in the mean time between failures, which can be shown by integrating the last equations.

The expression for the time t_f for a component to reach the failure state, i.e., to pass from the nominal permissible value P0 of the electrical parameter to the failure value Pf, after integration, substitution of the limits, and taking logarithms, takes the form

ln t_f = A″ + Ea / (kT),

where the coefficient A″ is determined during reliability testing and reflects the pre-failure (i.e., energetically activated) state of the component.

If the time t_f is understood as the mean time between failures, then for the exponential distribution law the failure rate λ can be determined as λ = 1/t_f.

The considered approach allows only qualitative and semi-quantitative conclusions in the theoretical analysis of component reliability, both because of the multiphase, heterogeneous nature of the multicomponent supersystem of which the component (and even an element of the component) is a part, and because of the form of the empirical time models of component degradation. This is evident from the summary of the causes, mechanisms, and physico-mathematical models of failures of IC components presented in Table 5.20 (the time models do not always follow a logarithmic relationship; in practice power-law relationships also occur).

The advantage of the approach based on the use of the Arrhenius equation is the ability to predict parametric failures of products based on accelerated tests. The disadvantage of this approach is the lack of consideration of the design and technological parameters of elements and components.

Thus, the Arrhenius approach is based on the empirical connection between the electrical parameter of a component or element and the failure mechanism with activation energy Ea. This shortcoming was overcome by the theory of G. Eyring, who introduced the concept of an activated complex of particles and justified it using the methods of statistical and quantum mechanics.

Nevertheless, the Arrhenius-Eyring-Gibbs approach is actively used to solve reliability problems under the assumption of temperature-dependent failure mechanisms, and it underlies the various models used to find the failure rates of electrical equipment given in reference literature, manuals, and the databases of reliability calculation programs.

Gibbs' ideas, not greatly revered in America but valued in Russia and across the former USSR, were creatively mastered and reworked by the Russian thermodynamic school of materials scientists. It is known, for example, that V.K. Semenchenko, relying on generalized functions associated with Pfaff's equations (1815, the so-called Pfaffian forms), proposed his own approach (his G-model) and modified the fundamental equation of D. Gibbs.

Table 5.20

Causes, characteristic mechanisms, and failure models of components and their elements*

| Physico-chemical system | Reliability parameter (indicator) | Cause (mechanism) of failures | Failure model | Activation energy Ea, eV |
| Physico-chemical system in general | Time of spontaneous exit from a stable state τ | Degradation processes | | |
| Sealing coatings (polymers) | Mean time between failures t_r | Destruction (processes of sorption, desorption, migration) | | |
| Semiconductor surface, p-type | Surface ion concentration n_s | Inversion, electromigration | | |
| Solid aluminum (bulk) | Mean time between failures t_f | Thermomechanical stresses | | |
| Metallization (film) | Mean time between failures t_f | Electromigration, oxidation, corrosion, electrocorrosion | | |
| Interconnections | Contact resistance R | Formation of intermetallic compounds | | |
| Resistors | Contact resistance R | Oxidation | | |
| Capacitors | Capacitance C | Diffusion, oxidation | | |
| Micromechanical accelerometer (MMA) | Sensing element of the mechanical-deformation-to-acceleration converter | Microcreep | | 1.5-2 |

* Data taken from: VLSI Technology. In 2 books. Book 2 / K. Mogeb [et al.]; transl. from English; ed. S. Sze. Moscow: Mir, 1986. P. 431.

It should be noted that D. Gibbs himself prophetically pointed toward such a development of his ideas. As stated in the preface to the "Principles...", he "recognizes the inferiority of any theory" that does not take into account the properties of substances, the presence of radiation, and other electrical phenomena.

The fundamental equation of matter according to Gibbs (taking into account thermal, mechanical, and chemical properties) has the form of a total differential:

dε = t dη − p dV + Σ μi dmi,

or, equivalently, in expanded form convenient for visual analysis:

dε = t dη − p dV + μ1 dm1 + ... + μn dmn,

where Gibbs uses the following notation: ε is energy; t is temperature; η is entropy; p is pressure; V is volume; μi is the chemical potential; mi is the mole fraction of the i-th component (i = 1, ..., n).

Semenchenko, using the method of generalized functions (Pfaffian forms), introduced into the G-model the intensities of the electric (E) and magnetic (H) fields, together with the corresponding "coordinates", the electric (P) and magnetic (M) polarizations, modifying the G-model to the form

dε = t dη − p dV + E dP + H dM + Σ μi dmi.

The step-by-step procedure for applying the simplest model, the Arrhenius model, to the analysis of test data reduces to determining the temperature dependence of the component degradation processes.
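
In essence, such a procedure reduces to fitting ln t_f against 1/T to extract the activation energy; a minimal sketch with invented accelerated-test data:

```python
# Arrhenius analysis of accelerated-test data:
# fit ln(t_f) = A'' + E_a / (k*T) and extract E_a from the slope.
import numpy as np

K_EV = 8.617e-5                          # Boltzmann constant, eV/K
T = np.array([358.0, 378.0, 398.0])      # test temperatures, K
t_f = np.array([4200.0, 1600.0, 650.0])  # median times to failure, h

slope, intercept = np.polyfit(1.0 / T, np.log(t_f), 1)
e_a = slope * K_EV                       # activation energy, eV

print(f"E_a ~ {e_a:.2f} eV")
# With E_a known, t_f at the use temperature follows from the same line.
```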

In connection with the above, it is important to comment on the concept of reliability adopted by Motorola for semiconductor diodes, transistors, and ICs.

As is known, reliability is the probability that an IS will be able to successfully perform its functions under given operating conditions over a certain period of time. This is the classic definition.

Another definition of reliability is related to quality. Since quality is a measure of variability, up to potential, hidden nonconformity or failure in a representative sample, reliability is a measure of variability over time under operating conditions. Consequently, reliability is quality extended over time under operating conditions.

Finally, the reliability of products (products, including components) is a function of correct understanding of customer requirements and the introduction or implementation of these requirements into the design, manufacturing technology and operation of products and their structures.

The QFD (quality function deployment) method is a technology for deploying (structuring) the quality function: a product design process in which consumer requests are identified first, then the product specifications and manufacturing processes that best meet the identified needs are determined, resulting in higher-quality products. The QFD method is useful for establishing and identifying quality and reliability requirements for their implementation in innovative projects.

The ratio of the number of observed failures to the total number of product-hours at the end of the observation period is called the point estimate of the failure rate. This estimate is obtained from observations of a sample, for example, of ICs under test. The confidence estimate of the failure rate is performed using the χ² distribution:

λ* = χ²(α, ν) / (2 n t),

where λ* is the failure rate; α is the confidence level; ν = 2r + 2 is the number of degrees of freedom; r is the number of failures; n is the number of products; t is the test duration.

Example 5.6

Calculate the values of the χ² function for the 90% confidence level.

Solution

The calculation results are given in Table 5.21.

Table 5.21

Calculated values of the χ² function for the 90% confidence level
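
Such values can be reproduced, for example, with scipy, using ν = 2r + 2 degrees of freedom from the estimate above (the range of r is illustrative):

```python
# Chi-square quantiles for the 90% one-sided confidence level,
# as used in the failure-rate estimate above.
from scipy.stats import chi2

for r in range(0, 5):          # number of observed failures
    nu = 2 * r + 2             # degrees of freedom
    q = chi2.ppf(0.90, nu)     # 90% quantile of the chi-square law
    print(f"r = {r}: chi2(0.90, {nu}) = {q:.2f}")
```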

To increase the confidence of the estimate of the required operating time, Motorola today uses an approach based on determining the failure rate of components from an equation of the Eyring type, in which the coefficients A, B, and C are determined from test results and the acting factors are the temperature T, the relative humidity RH, and the electric field strength E.

Thus, the presented material indicates that, under conditions of fairly widespread use of foreign electronic products with unknown reliability indicators, the methods and models presented in this chapter can be recommended for determining and predicting the reliability indicators of components and systems: for components, using physical concepts based on the equations of Arrhenius, Eyring, Semenchenko, and Gibbs; for systems, using combinatorial analysis (parallel, series, and hierarchical structures).

  • The term "valley" used in the figure is a term from physical chemistry (not officially defined), used in particle state diagrams for particles that have lowered their energy, "fallen" from a peak into a valley (by analogy with mountaineering), overcome an energy barrier, and lost energy after doing work, i.e., made a transition to a lower energy level characterized by a lower Gibbs energy; this is a consequence of the principle of minimum energy, described through thermodynamic potentials and introduced into science (for example, into theoretical physics) by D. Gibbs himself.
  • Gibbs J. W. Basic Principles of Statistical Mechanics, Developed with Especial Reference to the Rational Foundation of Thermodynamics // Gibbs J. W. Thermodynamics. Statistical Mechanics: transl. from English; ed. D. N. Zubarev; comp. U. I. Frankfurt, A. I. Frank ("Classics of Science" series). Moscow: Nauka, 1982. Pp. 352-353.

Forecasting the reliability of a technical object is a scientific field that studies methods for predicting the technical condition of an object under the influence of specified factors.

Forecasting is used to determine the remaining life of systems, their technical condition, the number of repairs and technical services, consumption of spare parts and solving other problems in the field of reliability.

Prediction of reliability indicators can be made using various parameters (for example, fatigue strength, dynamics of the wear process, vibroacoustic parameters, content of wear elements in oil, cost and labor costs, etc.).

Modern forecasting methods are divided into three main groups.

1. Methods of expert assessments, the essence of which boils down to generalization, statistical processing and analysis of the opinions of specialists. The latter justify their point of view using information about similar objects and analyzing the state of specific objects.

2. Modeling methods, based on the basic principles of similarity theory. These methods consist of forming a model of the object under study, conducting experimental studies of the model, and recalculating the values obtained on the model to the natural object. For example, in accelerated testing the durability of the product is first determined under forced (harsh) operating conditions, and then the durability under real operating conditions is found using appropriate formulas and graphs.

3. Statistical methods, of which the extrapolation method is most widely used. It is based on patterns of changes in predicted parameters over time. To describe these patterns, select the simplest possible analytical function with a minimum number of variables.

Thus, through statistical processing, a parameter is determined that serves as a diagnostic sign of the technical condition of the engine, for example, crankcase gas breakthrough or oil consumption. Based on this parameter, the residual resource is predicted. It should be taken into account that the actual resource may fluctuate around the obtained value.

The main causes of inaccurate forecasting are insufficient completeness, reliability, and homogeneity of the information (homogeneous meaning information about identical products operated under identical conditions) and the low qualification of the forecaster.

The effectiveness of forecasting is determined by changes in the reliability indicator as a result of the implementation of recommended means of increasing it.

Materials for practical lessons No. 6 and 7.

Reliability prediction.

Reliability prediction. Predicting reliability taking into account preliminary information. Using indirect signs of failure prediction. Individual reliability prediction. Individual prediction of reliability using the pattern recognition method (Procedure for testing. Procedure for training the recognition function. Procedure for predicting product quality. An example of a method for individual prediction of product quality.).

PZ.6-7.1. Reliability prediction.

In accordance with the current GOST standards, the technical specifications for designed products (objects) record the requirement of experimental confirmation of a given reliability level, taking into account the acting loads.

For highly reliable objects (for example, space technology) this requirement is overly stringent (in the sense of requiring tests of a large number of identical objects) and not always practically feasible. Indeed, to confirm a probability of failure-free operation P = 0.999 with 95% confidence, 2996 successful tests must be carried out. If even one test is unsuccessful, the number of required tests increases further. To this must be added the very long test duration, since many objects must combine a high reliability level with a long operating time (resource). From this follows an important requirement: when assessing reliability, all accumulated preliminary information about the reliability of technical objects must be taken into account.
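
This figure follows from the zero-failure relation Pⁿ = 1 − γ: n = ln(1 − γ) / ln P = ln 0.05 / ln 0.999 ≈ 3·10³ failure-free tests.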

Forecasting reliability and failures is a prediction of expected reliability indicators and the probability of failures in the future based on information obtained in the past, or on the basis of indirect predictive signs.

Reliability calculations at the product design stage have the features of such forecasting, since an attempt is made to foresee the future state of a product that is still at the development stage.

Some of the tests discussed above contain elements of predicting the reliability of a batch of products from the reliability of a sample, for example, under a given test plan. These forecasting methods are based on studying the statistical patterns of failures.

But reliability and failures can also be predicted by studying the factors that cause failures. In this case, along with statistical patterns, the physical and chemical factors affecting reliability are also considered, which complicates the analysis but makes it possible to reduce its duration and make it more informative.

PZ.6-7.2. Predicting reliability taking into account preliminary information.

When assessing reliability, it is necessary to take into account all accumulated preliminary information about the reliability of technical objects. For example, it is important to combine the calculated information obtained at the preliminary design stage with the results of testing the object. In addition, the tests themselves are also very diverse and are carried out at different stages of the creation of an object and at different levels of its assembly (elements, blocks, units, subsystems, systems). Taking into account information characterizing changes in reliability in the process of improving an object makes it possible to significantly reduce the number of tests necessary for experimental confirmation of the achieved level of reliability.

In the process of creating technical objects, tests are carried out. Based on the analysis of the results of these tests, changes are made to the design aimed at improving their characteristics. Therefore, it is important to evaluate how effective these measures were and whether the reliability of the facility actually improved after the changes were made. Such an analysis can be performed using methods of mathematical statistics and mathematical models of changes in reliability.

If the probability of some event in a single experiment equals p, and in n independent experiments this event (a failure) occurred m times, then the confidence limits for p are found as follows:

Case 1. Let m ≠ 0. Then

p_lower = R1 · m/n,  p_upper = R2 · m/n,  (PZ.6-7.2.)

where the coefficients R1 and R2 are taken from the corresponding statistical tables.

Case 2. Let m = 0. Then p_lower = 0, and the upper bound is

p_upper = 1 − (1 − γ)^(1/n). (PZ.6-7.3.)

The value of p_upper is found from the equation

(1 − p_upper)^n = 1 − γ. (PZ.6-7.4.)

The one-sided confidence probabilities γ1 and γ2 are related to the two-sided confidence probability γ* by the known dependence

γ* = γ1 + γ2 − 1. (PZ.6-7.5.)
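
A minimal sketch of the zero-failure case (PZ.6-7.3), using the 2996-test example mentioned above:

```python
# Upper confidence limit for the failure probability when m = 0,
# from (1 - p_upper)**n = 1 - gamma  (see PZ.6-7.4).
def p_upper(n: int, gamma: float) -> float:
    return 1.0 - (1.0 - gamma) ** (1.0 / n)

print(f"p_upper = {p_upper(2996, 0.95):.6f}")   # ~0.001, i.e. P ~ 0.999
```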

Bench and ground tests provide the basic information about the reliability of an object, and reliability indicators are determined from their results. If the technical product is a complex system whose element reliabilities are determined partly by experiment and partly by calculation, then the method of equivalent parts is used to predict the reliability of the complex system.

Flight tests provide additional information about the reliability of the object, and this information should be used to refine and adjust the reliability indicators obtained in bench tests. Suppose it is necessary to refine the lower confidence limit of the probability of failure-free operation of an object that has passed bench (ground) tests and flight tests, with m = 0.
