Professor Michael Todinov
MSc, PhD, DEng/DSc
Professor in Mechanical Engineering
School of Engineering, Computing and Mathematics
Role
Michael Todinov conducts research and teaching in the areas of Reliability and Risk, Probabilistic Modelling, Applications of Algebraic Inequalities, Network Optimization, Uncertainty Quantification, Mechanics of Materials and Engineering Mathematics. He holds a PhD from the University of Birmingham on the mathematical modelling of thermal and residual stresses, and a higher doctorate, Doctor of Engineering (DEng), the engineering equivalent of the Doctor of Science (DSc). The higher doctorate was awarded for fundamental contributions in the area of new probabilistic concepts and models in Engineering.
Areas of expertise
- Reliability and risk modelling, Uncertainty quantification
- General methods for improving reliability and reducing risk
- Mechanics of Materials, Material Science and Advanced Stress Analysis
- Computer Science, Algorithms, Discrete mathematics
- Non-trivial algebraic inequalities and their applications
- Applied probability, probabilistic modelling, Monte Carlo simulation techniques
- Modelling and simulation of heat and thermochemical treatment of materials
- Stochastic flow networks, repairable flow networks, static flow networks, networks with disturbed flows, reliability networks, stochastic graphs
- Mathematical optimisation and optimisation algorithms under uncertainty
- Advanced C/C++ programming
- MATLAB programming
Teaching and supervision
Modules taught
- Engineering Reliability and Risk Management
- Engineering Mathematics and Modelling
- Fracture Mechanics
- Advanced Stress Analysis
- MATLAB programming and modelling with MATLAB
Research
M. Todinov's name is associated with creating the method of algebraic inequalities for generating new knowledge in science and technology, which can be used for optimising systems and processes; the foundations of risk-based reliability analysis (driven by the cost of failure); the theory of repairable flow networks and networks with disturbed flows; and the introduction of new domain-independent methods for improving reliability and reducing risk. M. Todinov also created analytical methods for evaluating the risk associated with the overlapping of random events on a time interval.
A sample of M. Todinov's results includes: the discovery of closed and dominated parasitic flow loops in real networks; the proof that the Weibull distribution is an incorrect model for the distribution of the breaking strength of materials, together with the derivation of the correct alternative to the Weibull model; a theorem on the exact upper bound of properties from random sampling of multiple sources; a general equation for the probability of failure of brittle components with complex shape; the formulation and proof of the necessary and sufficient conditions for the Palmgren-Miner rule and Scheil's additivity rule; the derivation of the correct alternative to the Johnson-Mehl-Avrami-Kolmogorov equation; the formulation of the dual network theorems for static flow networks and networks with disturbed flows; the discovery of the binomial expansion model for evaluating the risk associated with overlapping random events on a time interval; and the development of the methods of separation, segmentation, self-reinforcement (self-strengthening) and inversion as domain-independent methods for improving reliability and reducing risk.
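For context, the Palmgren-Miner rule mentioned above is the standard linear damage-accumulation rule of fatigue analysis (quoted here as background only, not as the necessary and sufficient condition established in this research): failure is expected once the summed damage fractions reach unity, n_1/N_1 + n_2/N_2 + ... + n_k/N_k = 1, where n_i is the number of load cycles applied at stress level i and N_i is the number of cycles to failure at that stress level.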
M. Todinov's research has been funded by the automotive, nuclear, and oil and gas industries, and by various research councils.
Research grants and awards
- Recipient of the prestigious IMechE award for risk reduction in Mechanical Engineering (IMechE, UK, 2017)
- Recipient of a best lecturer teaching award, as voted by students (Cranfield University, 2005)
- High-speed algorithms for the output flow in stochastic flow networks, (2009-2013), research project funded by The Leverhulme Trust, UK.
- High-speed algorithms for the output flow in stochastic flow networks with tree topology, (2007-2008), consultancy project funded by British Petroleum.
- Reliability Value Analysis for BP Taurt Development (2005-2006), consultancy project funded by Cooper Cameron.
- Reliability allocation in complex systems based on minimizing the total cost (2004-2007), research project funded by EPSRC.
- Modelling the probability of failure of mechanical components caused by defects (2003-2005), research project sponsored by British Petroleum.
- Developing the BP reliability strategy, generic models and software tools for reliability analysis and setting reliability requirements based on cost of failure and minimum failure-free operating periods (2002-2004), research project funded by British Petroleum.
- Modelling a single-channel AET production system versus a dual-channel AET system (2005), consultancy project sponsored by Total.
- Reliability case for all-electric subsea control system (2004), consultancy project funded by BP and Total.
- Modelling the uncertainty associated with the ductile-to-brittle transition temperature of inhomogeneous welds (2002), research project funded by NII/HSE, UK.
- Developing efficient statistical models and software for determining the uncertainty in the location of the ductile-to-brittle transition region for multi-run welds (2001-2002), research project sponsored by the Nuclear Installations Inspectorate, HSE/NII, UK.
- Developing efficient statistical methods and software for fitting the variation of the impact energy in the ductile/brittle transition region for sparse data sets (1998-2000), research project sponsored by the Nuclear Installations Inspectorate, HSE/NII, UK.
- Statistical modelling of Brittle and Ductile Fracture in Steels, research project funded by EPSRC (1998-2000).
- Probabilistic Approach for Fatigue Design and Optimisation of Cast Aluminium Structures (1997-1998) research project funded by EPSRC.
- Modelling the thermal and residual stresses of Si-Mn automotive suspension springs (1994-1997), research project funded by EPSRC and DTI.
- Six research projects related to mathematical modelling of heat- and mass-transfer during heat treatment of steels and mathematical modelling of non-isothermal phase transformation kinetics during heat treatment of steels, funded by the Bulgarian Ministry of Science and Education in the period 1988-1994.
- Optimal guillotine cutting of one- and two-dimensional stock in batch production (1986-1987), research project funded by the Union of Mathematicians, Bulgaria.
Research impact
- Creating the method of algebraic inequalities for generating new knowledge in science and technology
- Creating the foundations of the theory of repairable flow networks and networks with disturbed flows. High-speed algorithms for analysis, optimisation and control in real time of repairable flow networks.
- Discovering the existence of closed and dominated flow loops in real networks and developing algorithms for their removal.
- Developing new domain-independent methods for reliability improvement and risk reduction.
- Creating the foundations of risk-based reliability analysis – driven by the cost of system failure. Formulation of the principle of risk-based design.
- Creating the theoretical foundations of the maximum risk reduction attained within limited risk-reduction resources.
- Creating the theoretical foundations for evaluating the risk associated with overlapping random demands on a time interval.
- Introducing the concept 'stochastic separation' and a new reliability measure based on stochastic separation.
- Introducing the method of 'stochastic pruning' and creating on its basis ultra-fast algorithms for determining the production availability of complex networks.
- Formulation and proof of the upper bound variance theorem regarding the exact upper bound of properties from sampling multiple sources.
- Formulation and proof of the damage factorisation theorem – the necessary and sufficient condition for the validity of the Palmgren-Miner rule.
- An equation for the probability of fracture controlled by random flaws for components with complex shape.
- Theoretical and experimental proof that the Weibull distribution does not describe correctly the probability of failure of materials with flaws and a derivation of the correct alternative.
- A general equation related to reliability dependent on the relative configurations of random variables.
- Revealing the drawbacks of the maximum expected profit criterion in the case of risky prospects containing a limited number of risk-reward bets.
Publications
Journal articles
Todinov M, 'Lightweight Designs and Improving the Load-Bearing Capacity of Structures by the Method of Aggregation'
Mathematics 12 (10) (2024)
ISSN: 2227-7390 eISSN: 2227-7390
Abstract: The paper introduces a powerful method for developing lightweight designs and enhancing
the load-bearing capacity of common structures. The method, referred to as the ‘method of
aggregation’, has been derived from reverse engineering of sub-additive and super-additive algebraic
inequalities. The essence of the proposed method is consolidating multiple elements loaded in
bending into a reduced number of elements with larger cross sections but a smaller total volume of
material. This procedure yields a huge reduction in material usage and is the first major contribution
of the paper. For instance, when aggregating eight load-carrying beams into two beams supporting
the same total load, the material reduction was more than 1.58 times. The second major contribution
of the paper is in demonstrating that consolidating multiple elements loaded in bending into a
reduced number of elements with larger cross sections but the same total volume of material leads
to a big increase in the load-bearing capacity of the structure. For instance, when aggregating eight
cantilevered or simply supported beams into two beams with the same volume of material, the load-bearing capacity until a specified tensile stress increased twice. At the same time, the load-bearing capacity until a specified deflection increased four times.
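The figures quoted in this abstract can be checked with elementary beam-bending scaling arguments. The sketch below is purely illustrative and rests on assumptions that may differ from the configurations analysed in the paper: geometrically similar square cross-sections, equal spans, the same material, and the total load shared equally among the beams.

```python
# Illustrative scaling check for aggregating n beams into k beams
# (assumed: square cross-sections, equal spans, same material, equal load sharing).
n, k = 8, 2
load_per_beam_scale = n / k                       # each remaining beam carries more load

# (a) Same total volume of material: the side of each beam scales by sqrt(n/k).
side = (n / k) ** 0.5
stress_scale = load_per_beam_scale / side**3      # max bending stress ~ M / Z, with Z ~ side^3
deflection_scale = load_per_beam_scale / side**4  # max deflection ~ F / I, with I ~ side^4
print(1 / stress_scale)        # 2.0  -> load capacity at a given stress doubles
print(1 / deflection_scale)    # 4.0  -> load capacity at a given deflection quadruples

# (b) Same load capacity at a given stress: the section modulus per beam must scale
# by n/k, so the side scales by (n/k)**(1/3); compare the total volumes of material.
side_b = (n / k) ** (1 / 3)
volume_scale = (k / n) * side_b**2                # k beams, cross-sectional area ~ side^2
print(1 / volume_scale)        # ~1.59 -> material reduction of more than 1.58 times
```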
M.T.Todinov, 'Can system reliability be predicted from average component reliabilities?'
Safety and Reliability 42 (4) (2023) pp.214-240
ISSN: 0961-7353 eISSN: 2469-4126
Abstract: The paper reveals that a prediction of system reliability on demand based on
average reliabilities on demand of components is a fundamentally flawed
approach. A physical interpretation of algebraic inequalities demonstrated that
assuming average component reliabilities on demand entails an overestimation
of the system reliability on demand for systems with components logically
arranged in series and series-parallel and underestimation of the reliability on
demand for systems with components logically arranged in parallel. The key reason
for these discrepancies is the variability of components from the same type.
Techniques for countering variability by promoting asymmetric response through
inversion have also been introduced. The paper demonstrates that variability during
assembly operations can affect negatively the reliability of mechanical systems.
Accordingly, techniques for reducing the variability of stresses during
assembly operations have been discussed. Finally, the paper provides a discussion
related to the reasons for the relatively slow adoption of domain-independent
methods for improving reliability despite their numerous advantages.
Todinov MT, 'Reverse engineering of algebraic inequalities for system reliability predictions and enhancing processes in engineering'
IEEE Transactions on Reliability 73 (2) (2023) pp.902-911
ISSN: 0018-9529 eISSN: 1558-1721
Abstract: The paper examines the profound impact on the forecasted system reliability when one assumes average reliabilities on demand for components of various kinds but of the same type. In this paper, we use reverse engineering of a novel algebraic inequality to demonstrate that the prevalent practice of using average reliability on demand for components of the same type but different varieties to calculate system reliability on demand is fundamentally flawed.
This approach can introduce significant errors due to the innate variability of components within a given type.
Additionally, the paper illustrates the optimization of engineering processes using reverse engineering of sub-additive algebraic inequalities based on concave power laws. Employing reverse engineering on these sub-additive inequalities has paved the way for strategies that enhance the performance of diverse industrial processes. The primary advantage of these sub-additive inequalities lies in their simplicity, rendering them particularly suitable for reverse engineering.
Todinov M, 'On the use of analytical inequalities for improving reliability and reducing risk'
International Journal of Risk Assessment and Management 26 (1) (2023) pp.1-16
ISSN: 1466-8297 eISSN: 1741-5241
Abstract: The paper demonstrates a new domain-independent method for improving reliability and reducing risk including two fundamental approaches: the forward approach, based on deriving algebraic inequalities from real systems and processes; and the inverse approach, based on deriving new knowledge by meaningful interpretation of existing correct algebraic inequalities. The forward approach has been used to prove the domain-independent principle of the well-ordered systems which are characterised by the smallest possible risk of failure. The inverse approach has been used to generate new knowledge related to the relationship of the equivalent elastic constants of elements arranged in series and parallel and the upper and lower bounds of the percentage of faulty components in pooled batches of components with unknown sizes.
Todinov M, 'Enhancing the reliability of series-parallel systems with multiple redundancies by using system-reliability inequalities'
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering 9 (4) (2023)
ISSN: 2332-9017 eISSN: 2332-9025
Abstract: The reverse engineering of a valid algebraic inequality often leads to a projection of a novel physical reality characterized by a distinct signature: the algebraic inequality itself. This paper uses reverse engineering of valid algebraic inequalities for generating new knowledge and substantially improving the reliability of common series-parallel systems. Our study emphasizes that in the case of series-parallel systems with interchangeable redundant components, the asymmetric arrangement of components always leads to higher system reliability than a symmetric arrangement. This finding remains valid, irrespective of the particular reliabilities characterizing the components. Next, the paper presents novel system reliability inequalities whose reverse engineering enabled significant enhancement of the reliability of series-parallel systems with asymmetric arrangements of redundant components, without knowledge of the individual component reliabilities. Lastly, the paper presents a new technique for validating complex algebraic inequalities associated with series-parallel systems. This technique relies on permutation of variable values and the method of segmentation.
Todinov MT, 'Reliability-Related Interpretations of Algebraic Inequalities'
IEEE Transactions on Reliability [online first] (2023)
ISSN: 0018-9529 eISSN: 0018-9529
Abstract: New results related to maximizing the reliability of common systems with interchangeable redundancies at a component level have been obtained by using the method of algebraic inequalities. It is shown that for systems with independently working components with interchangeable redundancies, the system reliability
corresponding to a symmetric arrangement of the redundant components is always inferior to the system reliability corresponding to an asymmetric arrangement of the redundant components, irrespective of the probabilities of failure of the different types of components. It is also shown that for series–parallel systems, the system reliability is maximized by arranging the main components in ascending order of their probabilities of failure, whereas the redundant components are arranged in descending order of their
probabilities of failure. Finally, this article derives rigorously the highly counterintuitive result that if two components must be selected from n batches containing reliable and faulty components with unknown proportions, the likelihood that both components will be reliable is maximized by selecting both components from a randomly selected batch.
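The last result can be illustrated with a small numerical check. The sketch below is an illustration only; it assumes that the alternative strategy is drawing each of the two components from an independently and randomly chosen batch, and it uses arbitrary (hypothetical) fractions of reliable components.

```python
import random

def p_both_reliable_single_batch(fracs):
    # Pick one batch at random and draw both components from it.
    return sum(p * p for p in fracs) / len(fracs)

def p_both_reliable_independent_batches(fracs):
    # Draw each component from an independently chosen random batch.
    mean = sum(fracs) / len(fracs)
    return mean * mean

random.seed(1)
fracs = [random.random() for _ in range(5)]   # hypothetical unknown fractions of reliable parts
print(p_both_reliable_single_batch(fracs))        # mean of the squared fractions ...
print(p_both_reliable_independent_batches(fracs)) # ... is never below the square of the mean
```

The comparison reduces to the inequality "mean of squares is at least the square of the mean", which holds for any set of batch fractions.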
Todinov MT, 'Optimising processes and generating knowledge by interpreting a new algebraic inequality'
International Journal of Modelling, Identification and Control 41 (1-2) (2022) pp.98-109
ISSN: 1746-6172 eISSN: 1746-6180
Abstract: This paper focuses on optimising processes and generating knowledge based on interpreting a new algebraic inequality. An interpretation of the new inequality yielded a strategy for reducing the amount of pollutants released from an industrial process. An alternative interpretation of the same inequality established that the deflection of n elastic elements connected in series is at least n^2 times larger than the deflection of the same elements connected in parallel, irrespective of the individual stiffness values of the elements. In addition, an
alternative interpretation of the new inequality yielded a counter-intuitive result concerning improving the chances of picking a winning lottery ticket. Finally, the paper introduces a method for improving reliability by increasing the level of balancing and novel interpretations of algebraic inequalities related to this method. This is done by assessing the probability of selecting items of the same variety and determining the lower and upper bounds of this probability.
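The n^2 statement about elastic elements can be checked directly for linear springs. The sketch below is a minimal illustration under that assumption, with arbitrary (hypothetical) stiffness values.

```python
# n linear elastic elements under the same force F, arbitrary stiffnesses k_i.
def deflection_series(F, ks):
    return F * sum(1.0 / k for k in ks)   # compliances add in series

def deflection_parallel(F, ks):
    return F / sum(ks)                    # stiffnesses add in parallel

ks = [120.0, 35.0, 980.0, 60.0]           # hypothetical stiffness values
F = 1.0
ratio = deflection_series(F, ks) / deflection_parallel(F, ks)
print(ratio, ">=", len(ks) ** 2)          # Cauchy-Schwarz guarantees ratio >= n^2
```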
Todinov MT, 'Improving reliability by increasing the level of balancing and by substitution'
Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 237 (7) (2022) pp.1550-1562
Abstract: Problems with the current methods for reliability improvement are discussed as well as two generic methods for
reliability improvement. The paper argues that reliability improvement is underpinned by common principles that
provide key input to the design process. The domain-independent methods change the way engineers and scientists
approach reliability improvement. The presented generic methods encourage simple low-cost solutions as opposed to
some traditional high-cost solutions based on introducing redundancy, condition monitoring, reinforcement and use of
expensive materials. The domain-independent methods allow engineers and scientists in a particular domain to access
excellent solutions and practices for eliminating failure modes in other domains. In this way, the constant ‘reinventing
of the wheel’ is avoided. As part of the presented approach, a generic method for increasing reliability by increasing the
level of balancing and by substitution have been presented. In addition, a new classification of techniques related to
increasing the level of balancing has been introduced and discussed for the first time. The paper also proves rigorously
that if two components must be selected from n batches containing reliable and faulty components with unknown
proportions, the likelihood that both components will be reliable is maximised by selecting the components from a
randomly selected batch.
Todinov MT, 'Probabilistic interpretation of algebraic inequalities related to reliability and risk'
Quality and Reliability Engineering International 237 (7) (2022) pp.1550-1562
ISSN: 0748-8017 eISSN: 1099-1638
Abstract: The paper explores the probabilistic interpretations of algebraic inequalities and
presents several findings. First, the inequality of the additive ratios can be used
to increase the probability of an event occurring within a set of mutually exclusive
and exhaustive events. The interpretation of this inequality produced a
counter-intuitive result, that for suppliers delivering the same quantity of reliable
products alongside unknown numbers of unreliable products, the probability
of purchasing a reliable product from a randomly selected supplier is higher
than the probability of purchasing a reliable product from the market formed
by all suppliers. Next, the paper discusses how averaging the reliabilities of
components from different varieties can lead to a significant overestimation of
the calculated system reliability, as demonstrated by another algebraic inequality
interpretation. Finally, the paper derives tight bounds for the reliability on demand in a load-strength interference model by interpreting Hayashi's inequality. Notably, these bounds do not depend on the shape of the load distribution and only require the strength distribution to be known in relatively small vicinities of the lower and upper bound of the load.
Todinov MT, 'A general class of algebraic inequalities for generating new knowledge and optimising the design of systems and processes'
Research in Engineering Design 33 (2022) pp.161-171
ISSN: 0934-9839 eISSN: 1435-6066
Abstract: A special class of general inequalities has been identified that provides the opportunity for generating new knowledge that can be used for optimising systems and processes in diverse areas of science and technology. It is demonstrated that inequalities belonging to this class can always be interpreted meaningfully if the variables and separate terms of the inequalities represent additive quantities. The meaningful interpretation of a new algebraic inequality based on the proposed general class of inequalities led to developing a light-weight design for a supporting structure based on cantilever beams, reducing
the maximum force upon impact, generating new knowledge about the deflection of elastic elements connected in parallel and series and optimising the allocation of resources to maximise expected benefit. The interpretation of the new inequality yielded that the deflection of elastic elements connected in parallel is at least n^2 times smaller than the deflection of the same elastic elements connected in series, irrespective of the individual stiffness values of the elastic elements. The interpretation of another algebraic inequality from the proposed general class led to a method for decreasing the stiffness of a mechanical
assembly by cyclic permutation of the elastic elements building the assembly. The analysis showed that a decrease of stiffness exists only if asymmetry of the stiffness values in the connected elements is present.
Todinov M, 'Optimised design of systems and processes using algebraic inequalities'
Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 236 (8) (2021) pp.3912-3921
ISSN: 0954-4062 eISSN: 2041-2983
Abstract: A method for optimising the design of systems and processes has been introduced that consists of interpreting the left- and the right-hand side of a correct algebraic inequality as the outputs of two alternative design configurations delivering the same required function. In this way, on the basis of an algebraic inequality, the superiority of one of the configurations is established. The proposed method opens wide opportunities for enhancing the performance of systems and processes and is very useful for design in general. The method has been demonstrated on systems and processes from diverse application domains.
The meaningful interpretation of an algebraic inequality based on a single-variable sub-additive function led to developing a light-weight design for a supporting structure based on cantilever beams. The interpretation of a new algebraic inequality based on a multivariable sub-additive function led to a method for increasing the kinetic energy absorbing capacity during inelastic impact. The interpretation of a new inequality has been used for maximising the mass of deposited substance during electrolysis and for generating new knowledge about the deflection of elastic elements connected in parallel and series.
Todinov M, 'Generation of new knowledge and optimisation of systems and processes through meaningful interpretation of algebraic inequalities'
International Journal of Mathematical Modelling and Numerical Optimisation 11 (4) (2021) pp.428-449
ISSN: 2040-3607 eISSN: 2040-3615
Abstract: The paper introduces a method for increasing the impact of additive quantities by meaningful interpretation of multivariate sub-additive and super-additive functions. The paper demonstrates that the segmentation of additive quantities through sub-additive and super-additive functions can be used to generate new knowledge and optimise systems and processes, and the presented algebraic inequalities are applicable to any area of science and technology. The meaningful interpretation of the modified Cauchy-Schwarz inequality led to a method for increasing the power output from a voltage source and to a method for increasing the capacity for absorbing strain energy of loaded mechanical components. It was found that the existence of asymmetry is essential to increasing the strain energy absorbing capacity and the power output. Loaded elements experiencing the same displacement do not yield an increase of the absorbed strain energy. Similarly, loaded resistances experiencing the same current do not yield an increase of the power output. Finally, the meaningful interpretation of an algebraic inequality in terms of potential energy resulted in a general necessary condition for minimising the sum of powers of distances to a fixed number of points in space.
Todinov M, 'Meaningful interpretation of algebraic inequalities to achieve uncertainty and risk reduction'
Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability 236 (5) (2021) pp.841-854
ISSN: 1748-006X eISSN: 1748-0078
Abstract: The paper develops an important method related to using algebraic inequalities for uncertainty and risk reduction and enhancing systems performance. The method consists of creating relevant meaning for the variables and different parts of the inequalities and linking them with real physical systems or processes. The paper shows that inequalities based on multivariable sub-additive functions can be interpreted meaningfully and the generated new knowledge used for optimising systems and processes in diverse areas of science and technology. In this respect, an interpretation of the Bergstrom inequality, which is based on a sub-additive function, has been used to increase the accumulated strain energy in components
loaded in tension and bending. The paper also presents an interpretation of the Chebyshev’s sum inequality that can be used to avoid the risk of overestimation of returns from investments and an interpretation of a new algebraic inequality that can be used to construct the most reliable series-parallel system. The meaningful interpretation of other algebraic inequalities yielded a highly counter-intuitive result related to assigning devices of different types to missions composed of identical tasks. In the case where the probabilities of a successful accomplishment of a task, characterising the devices, are unknown, the best strategy for a successful accomplishment of the mission consists of selecting randomly an arrangement including devices of the same type. This strategy is always correct, irrespective of existing
unknown interdependencies among the probabilities of successful accomplishment of the tasks characterising the devices.
Michael Todinov, 'Reducing uncertainty and obtaining superior performance by segmentation based on algebraic inequalities'
International Journal of Reliability and Safety 14 (2/3) (2020) pp.103-115
ISSN: 1479-389X eISSN: 1479-3903
Abstract: The paper demonstrates for the first time uncertainty reduction and attaining superior performance through segmentation based on algebraic inequalities. Meaningful interpretation of algebraic inequalities has been used for generating new knowledge in unrelated application domains. Thus, the method of segmentation through an abstract inequality led to a new theorem related to electrical circuits. The power output from a source with particular voltage, on elements connected in series, is smaller than the total power output
from the segmented sources applied to the individual elements. Segmentation attained through the same abstract inequality led to another new theorem related to electrical capacitors. The energy stored by a charge of given size on a single capacitor is smaller than the total energy stored in multiple capacitors with the same equivalent capacity, by segmenting the initial charge over the separate capacitors. Finally, inequalities based on sub-additive and superadditive functions have been introduced for reducing uncertainty and obtaining
superior performance by a segmentation or aggregation of controlling factors. By a meaningful interpretation of sub-additive and super-additive inequalities, superior performance has been achieved for processes described by a power-law dependence.
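The two circuit theorems stated in this abstract can be checked numerically; both follow from the Cauchy-Schwarz inequality. The sketch below is an illustration only, under an assumed reading of "segmented sources": the source voltage (or the initial charge) is split into parts applied to the individual elements, with the splits chosen here arbitrarily.

```python
# Illustration only (assumed interpretation of the segmentation theorems).
R = [10.0, 47.0, 5.6]                 # hypothetical resistances, connected in series
V = 12.0
V_parts = [3.0, 7.0, 2.0]             # an arbitrary split of V over the elements (sums to V)
p_series = V**2 / sum(R)                                   # power from one source across the series chain
p_segmented = sum(v**2 / r for v, r in zip(V_parts, R))    # total power from segmented sources
print(p_series, "<=", p_segmented)

C = [4.7e-6, 1.0e-6, 2.2e-6]          # hypothetical capacitances (parallel equivalent = sum)
Q = 1.0e-4
Q_parts = [5.0e-5, 2.0e-5, 3.0e-5]    # an arbitrary split of Q over the capacitors (sums to Q)
e_single = Q**2 / (2 * sum(C))                             # whole charge on the equivalent capacitor
e_segmented = sum(q**2 / (2 * c) for q, c in zip(Q_parts, C))
print(e_single, "<=", e_segmented)
```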
Michael Todinov, 'Using algebraic inequalities to reduce uncertainty and risk'
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering 6 (4) (2020)
ISSN: 2332-9017
Abstract: The paper discusses applications of the domain-independent method of algebraic inequalities, for reducing uncertainty and risk. Algebraic inequalities are used for revealing the intrinsic reliability of competing systems and ranking the systems in terms of reliability in the absence of knowledge related to the reliabilities of their components. An algebraic inequality has also been used to establish the principle of the well-ordered parallel-series systems which, in turn, has been applied to maximise the reliability of common parallel-series systems.
The paper introduces a method of linking an abstract inequality to a real process by a meaningful interpretation of the variables entering the inequality and its left- and right-hand part. The meaningful interpretation of a simple algebraic inequality led to a counter-intuitive result. If items from two varieties are present in a large batch, the probability of randomly selecting two items of different varieties does not exceed 0.5.
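A one-line check of this bound, under the assumption that exactly two varieties with fractions f and 1 - f are present: the probability of drawing two items of different varieties from a large batch is 2f(1 - f) = 1/2 - (1/2)(2f - 1)^2 <= 1/2, with equality only when the two varieties are equally represented (f = 1/2).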
Todinov M, 'On two fundamental approaches for reliability improvement and risk reduction by using algebraic inequalities'
Quality and Reliability Engineering International 37 (2) (2020) pp.820-840
ISSN: 0748-8017
Abstract: The paper introduces two fundamental approaches for reliability improvement and risk reduction by using nontrivial algebraic inequalities: (a) by proving an inequality derived or conjectured from a real system or process and (b) by creating meaningful interpretation of an existing nontrivial abstract inequality
relevant to a real system or process. A formidable advantage of the algebraic inequalities can be found in their capacity to produce tight bounds related to reliability-critical design parameters in the absence of any knowledge about the variation of the controlling variables. The effectiveness of the first approach has
been demonstrated by examples related to decision-making under deep uncertainty and examples related to ranking systems built on components whose reliabilities are unknown. To demonstrate the second approach, meaningful interpretation has been created for an inequality that is a special case of the Cauchy-Schwarz inequality. By varying the interpretation of the variables, the same inequality holds for elastic elements, resistors, and capacitors arranged in series and parallel. The paper also shows that meaningful interpretation of superadditive and subadditive inequalities can be used with success for optimizing various systems and processes. Meaningful interpretation of superadditive and subadditive inequalities has been used for maximizing the stored elastic strain energy at a specified total displacement and for optimizing the profit from
an investment. Finally, meaningful interpretation of an algebraic inequality has been used for reducing uncertainty and the risk of incorrect prediction about the magnitude ranking of sequential random events.
Todinov M, 'Reducing the risk of failure by deliberate weaknesses'
International Journal of Risk and Contingency Management 9 (2) (2020) pp.33-53
ISSN: 2160-9624
Abstract: The deliberate weaknesses are points of weakness towards which a potential failure is channelled in order to limit the magnitude of the consequences from failure. The paper shows that reducing risk by deliberate weaknesses is a powerful domain-independent method which transcends mechanical engineering and works in various unrelated areas of human activity. A classification has been proposed of categories and classes of deliberate weaknesses reducing risk as well as discussion related to the underlying mechanisms of risk reduction. It is shown that introducing and repositioning existing weaknesses is an effective risk-reduction strategy which transcends engineering and can be applied in many unrelated domains. The paper shows that in the case where the cost of failure of the separate components in a system varies significantly, an approach based on deliberate weaknesses has a significant advantage over the equal-reliability/equal-strength design approach.
Todinov M, 'Improving reliability and reducing risk by using inequalities'
Safety and Reliability 38 (4) (2019) pp.222-245
ISSN: 0961-7353 eISSN: 2469-4126
Abstract: The paper introduces a powerful domain-independent method for improving reliability and reducing risk based on algebraic inequalities, which transcends mechanical engineering and can be applied in many unrelated domains. The paper demonstrates the application of inequalities to reduce the risk of failure by producing tight uncertainty bounds for properties and risk-critical parameters. Numerous applications of the upper-bound-variance inequality have been demonstrated in bounding uncertainty from multiple sources, among which is the estimation of uncertainty in setting positioning distance and increasing the robustness of electronic devices. The rearrangement inequality has been used to maximise the reliability of components purchased from suppliers. With the help of the rearrangement inequality, a highly counter-intuitive result has been obtained. If no information about the component reliability characterising the individual suppliers is available, purchasing components from a single supplier or from the smallest possible number of suppliers maximises the probability of a high-reliability assembly. The Cauchy-Schwarz inequality has been applied for determining sharp bounds of mechanical properties and Chebyshev's inequality for determining a lower bound for the reliability of an assembly. The inequality of the inversely correlated random events has been introduced and applied for ranking risky prospects involving units with unknown probabilities of survival.
Michael Todinov, 'Reliability improvement and risk reduction by inequalities and segmentation'
Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability 234 (1) (2019) pp.63-73
ISSN: 1748-006X eISSN: 1748-0078
Abstract: The paper introduces new domain-independent methods for improving reliability and reducing risk based on algebraic inequalities and chain-rule segmentation. Two major advantages of algebraic inequalities for reducing risk have been demonstrated: (i) ranking risky prospects in the absence of any knowledge related to the individual building parts and (ii) reducing the variability of a risk-critical output parameter. The paper demonstrates a highly counter-intuitive result derived by using inequalities: if no information about the component reliability characterising the individual suppliers is available, purchasing components from a single supplier or from the smallest possible number of suppliers maximises the probability of a high-reliability assembly. The paper also demonstrates the benefits from combining domain-independent methods
and domain-specific knowledge for achieving risk reduction in several unrelated domains: decision-making, manufacturing, strength of components and kinematic analysis of complex mechanisms. In this respect, the paper introduces the chain rule segmentation method and applies it to reduce the risk of computational errors in kinematic analysis of complex mechanisms. The paper also demonstrates that combining the domain-independent method of segmentation and domain-specific knowledge in stress analysis leads to a significant reduction of the internal stresses and reduction of the risk of overstress failure.
Todinov MT, 'Domain-independent approach to risk reduction'
Journal of Risk Research 23 (6) (2019) pp.796-810
ISSN: 1366-9877 eISSN: 1466-4461
Abstract: The popular domain-specific approach to risk reduction created the illusion that efficient risk reduction can be delivered successfully solely by using methods offered by the specific domain. As a result, many industries have been deprived from efficient risk reducing strategy and solutions. This paper argues that risk reduction is underlined by domain-independent methods and principles which, combined with knowledge from the specific domain, help to generate effective risk reduction solutions. In this respect, the paper introduces a powerful method for reducing the likelihood of computational errors based on combining the domain-independent method of segmentation and local knowledge of the chain rule for differentiation. The paper also demonstrates that lack of knowledge of domain-independent principles for risk reduction misses opportunities to reduce the risk of failure even in such mature field like stress analysis. The domain-independent methods for risk reduction do not rely on reliability data or knowledge of physical mechanisms underlying possible failure modes and are particularly well suited for developing new designs, with unknown failure mechanisms and failure history. In many cases, the reliability improvement and risk reduction by using the domain-independent methods reduces risk at no extra cost or at a relatively small cost. The presented domain-independent methods work across totally unrelated domains and this is demonstrated by the supplied examples which range from various areas of engineering and technology, computer science, project management, health risk management, business and even mathematics. The domain-independent risk reduction methods presented in this paper promote building products and systems characterised by high-reliability and resilience.
Todinov MT, 'Reliability Improvement and Risk Reduction through Self-reinforcement'
International Journal of Risk Assessment and Management 22 (1) (2018) pp.18-43
ISSN: 1466-8297
Abstract: The method of self-reinforcement has been introduced as a domain-independent method for improving reliability and reducing risk. A key feature of self-reinforcement is that increasing the external/internal forces intensifies the system's response against these forces. As a result, the driving net force towards precipitating failure is reduced. In many cases, the self-reinforcement mechanisms achieve remarkable reliability increase at no extra cost. Two principal ways of self-reinforcement have been identified: reinforcement by capturing a proportional compensating factor and reinforcement by using feedback loops. Mechanisms of transforming forces and motion into self-reinforcing response have been introduced and demonstrated through appropriate examples. Mechanisms achieving self-reinforcement response by self-aligning, self-anchoring and modified geometry have also been introduced. For the first time, the potential of positive feedback loops to achieve self-reinforcement and risk reduction was demonstrated. In this respect, it is shown that self-energizing, fast growth and fast transition provided by positive feedback loops can be used with success for achieving reliability improvement. Finally, a classification was proposed of methods and techniques for reliability improvement and risk reduction based on the method of self-reinforcement.
Todinov MT, 'Improving reliability and reducing risk by minimizing the rate of damage accumulation'
Safety and Reliability 37 (2/3) (2018) pp.148-176
ISSN: 0961-7353 eISSN: 2469-4126
Abstract: The paper introduces the principle of minimized rate of damage accumulation as a domain-independent principle of reliability improvement and risk reduction. A classification is proposed of methods for reducing the rate of damage accumulation. The paper introduces the method of substitution for reducing the rate of damage accumulation. The original assembly/system is substituted with assembly/system performing the same function and based on different physical principles. Such a substitution often eliminates failure modes characterised by intensive damage accumulation. One of the methods discussed is an optimal replacement resulting in the smallest rate of damage accumulation and maximum system reliability. A method for achieving the smallest rate of damage accumulation for a system with components logically arranged in series has been proposed for the first time. A dynamic programming algorithm for determining the optimal variation of multiple damage-inducing factors to minimize the rate of damage accumulation has also been proposed for the first time. The paper shows that the necessary and sufficient condition for using the additivity rule for calculating the threshold of accumulated damage precipitating failure is the factorisation of the rate of damage accumulation into a function of the amount of damage and a function of the damage-inducing factor.
Todinov M, 'Closed parasitic flow loops and dominated loops in networks'
International Journal of Operational Research 36 (4) (2017) pp.555-590
ISSN: 1745-7645
Abstract: The paper raises awareness of the presence of closed parasitic flow loops in the solutions of published algorithms for maximising the throughput flow in networks. If the routed commodity is an interchangeable commodity, a closed parasitic loop can effectively be present even if the routed commodity does not physically travel along a closed loop. The closed parasitic flow loops are highly undesirable loops of flow, which effectively never leave the network. Parasitic flow loops increase the cost of transportation of the flow unnecessarily, consume residual capacity from the edges of the network, increase the likelihood of deterioration of perishable products, increase congestion and energy wastage. Accordingly, the paper presents a theoretical framework related to parasitic flow loops in networks. By using the presented framework, it is demonstrated that the probability of existence of closed and dominated flow loops in networks is surprisingly high. The paper also demonstrates that the successive shortest path strategy for minimising the total length of transportation routes from multiple interchangeable origins to multiple destinations fails to minimise the total length of the routes. It is demonstrated that even in a network with multiple origins and a single destination, the successive shortest path strategy could still fail to minimise the total length of the routes. By using the developed theoretical framework, it is shown that a minimum total length of the transportation routes in a network with multiple interchangeable origins is attained if and only if no closed parasitic flow loops and dominated flow loops exist in the network. Accordingly, an algorithm for minimising the total length of the transportation routes by eliminating all dominated parasitic flow loops is proposed.
Todinov MT, 'Mechanisms for improving reliability and reducing risk by stochastic and deterministic separation'
Journal of Risk Research 22 (4) (2017) pp.448-474
ISSN: 1366-9877 eISSN: 1466-4461
Abstract: The paper provides for the first time a comprehensive introduction into the mechanisms through which the method of separation achieves risk reduction and into the ways it can be implemented in engineering designs. The concept stochastic separation of critical random events on a time interval, which consists of guaranteeing with a specified probability a specified degree of distancing between the random events, is introduced. Efficient methods for providing stochastic separation by reducing the duration times of overlapping critical random events on a time interval are presented. The paper shows that the probability of overlapping of critical events, randomly appearing on a time interval, is practically insensitive to the distribution of their duration times and to the variance of the duration times as long as the mean of the duration times remains the same. A rigorous proof is presented that this statement is valid even for two random events on a time interval. The paper also provides insight into various mechanisms through which deterministic separation improves reliability and reduces risk. It is demonstrated that the separation on properties is an efficient technique for compensating the drawbacks associated with homogeneous properties. It is demonstrated that improving reliability by including redundancy, improving reliability by segmentation and some of the deliberate weak link techniques and stress limiters techniques for reducing risk are effectively special cases of a deterministic separation. Finally, the paper demonstrates that in a number of cases, the way to extract benefit from the method of separation is to build and analyse a mathematical model based on the method of separation. A comprehensive classification of the discussed methods for stochastic and deterministic separation is also presented.
Todinov MT, 'Reliability and risk controlled by the simultaneous presence of random events on a time interval'
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering 4 (2) (2017)
ISSN: 2332-9017
Abstract: The paper treats the important problem related to risk controlled by the simultaneous presence of critical events, randomly appearing on a time interval and shows that the expected time fraction of simultaneously present events does not depend on the distribution of events durations. In addition, the paper shows that the probability of simultaneous presence of critical events is practically insensitive to the distribution of the events durations. These counter-intuitive results provide the powerful opportunity to evaluate the risk of overlapping of random events through the mean duration times of the events only, without requiring the distributions of the events durations or their variance. A closed-form expression for the expected fraction of unsatisfied demand for random demands following a homogeneous Poisson process in a time interval is introduced for the first time. In addition, a closed-form expression related to the expected time fraction of unsatisfied demand, for a fixed number of consumers initiating random demands with a specified probability, is also introduced for the first time. The concepts stochastic separation of random events based on the probability of overlapping and the average overlapped fraction are also introduced. Methods for providing stochastic separation and optimal stochastic separation achieving balance between risk and cost of risk reduction are presented.
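The claimed insensitivity to the duration distribution can be probed with a simple Monte Carlo experiment. The sketch below is an illustration only, not the paper's analytical method; it assumes events with start times uniformly distributed on a finite horizon and compares constant durations with exponentially distributed durations of the same mean.

```python
import random

def any_overlap(starts, durations):
    """Return True if any two events [s, s + d) overlap."""
    events = sorted(zip(starts, durations))
    current_end = float("-inf")
    for s, d in events:
        if s < current_end:
            return True
        current_end = max(current_end, s + d)
    return False

def overlap_probability(n_events, horizon, mean_duration, sampler, trials=50_000):
    hits = 0
    for _ in range(trials):
        starts = [random.uniform(0.0, horizon) for _ in range(n_events)]
        durations = [sampler(mean_duration) for _ in range(n_events)]
        hits += any_overlap(starts, durations)
    return hits / trials

constant = lambda m: m                               # fixed duration
exponential = lambda m: random.expovariate(1.0 / m)  # exponential duration, same mean

random.seed(0)
print(overlap_probability(5, 100.0, 1.0, constant))
print(overlap_probability(5, 100.0, 1.0, exponential))  # close to the value above
```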
Todinov M, 'Reducing Risk Through Inversion and Self-Strengthening'
International Journal of Risk and Contingency Management 6 (1) (2017) pp.14-42
ISSN: 2160-9625 eISSN: 2160-9632
Abstract: A number of new techniques for reliability improvement and risk reduction based on the inversion method, such as: 'inverting design variables', 'inverting by maintaining an invariant', 'inverting resulting in a reinforcing counter-force', 'negating basic required functions' and 'moving backwards to general and specific contributing factors' have been introduced for the first time. By using detailed calculations, it has been demonstrated how the new technique 'repeated inversion maintaining an invariant' can be applied to reduce the risk of collision for multiple ships travelling at different times and with variable speeds. It has been demonstrated that for pressure vessels, inversion of the geometric parameters by maintaining an invariant volume could result not only in an increased safety but also in a significantly reduced weight.
The method of self-strengthening has been introduced for the first time as a systematic method for improving reliability and reducing risk. The method of self-strengthening by capturing a proportional compensating factor and self-strengthening by creating a positive feedback loop have been proposed for the first time as reliability improvement tools. Finally, classifications have been proposed of methods and techniques for risk reduction based on the methods of inversion and self-strengthening.
Todinov M, 'Improving reliability and reducing risk by separation'
International Journal of Risk and Contingency Management 6 (4) (2017) pp.16-39
Abstract: The paper introduces the method of separation for improving reliability and reducing technical risk and provides insight into the various mechanisms through which the method of separation attains this goal. A comprehensive classification of techniques for improving reliability and reducing risk, based on the method of separation, has been proposed for the first time. From this classification, three principal categories of separation techniques have been identified: (i) assuring distinct functions/properties/behaviour for distinct components or parts; (ii) assuring distinct properties/behaviour at distinct time, value of a parameter, conditions or scale; and (iii) distancing risk-critical factors. The concept 'stochastic separation' of random events and methods for providing a stochastic separation have been introduced. It is shown that separation of properties is an efficient technique for compensating the drawbacks associated with a selection based on homogeneous properties. It is also demonstrated that the method of deliberate weak links and the method of segmentation can be considered as a special case of the method of separation. Finally, the paper demonstrates that the traditional reliability measure 'safety margin' is misleading and should not be used as a measure of the relative separation between load and strength.
Todinov M, 'Reducing Risk by Segmentation'
International Journal of Risk and Contingency Management 6 (3) (2017) pp.27-46
ISSN: 2160-9624 eISSN: 2160-9632
Abstract: The paper provides analysis of the various mechanisms through which the segmentation improves reliability and reduces technical risk and presents a classification of risk-reduction techniques based on segmentation. On the basis of theoretical arguments and examples, it is demonstrated that segmentation increases the tolerance of components to flaws causing local damage, reduces the rate of damage accumulation and damage escalation and reduces the hazard potential. The paper also demonstrates that segmentation essentially replaces a sudden failure on a macro-level with gradual deterioration of the system on a micro-level through non-critical failures. It is demonstrated that segmentation can even reduce the likelihood of a loss from opportunity bets and the likelihood of erroneous conclusion from imperfect tests. Finally, a comprehensive classification of methods and techniques for reducing risk, based on segmentation, has been proposed.
Todinov MT, 'Stochastic pruning and its application for fast estimation of the expected total output of complex systems'
Electronic Notes in Theoretical Computer Science 327 (2016) pp.109-123
ISSN: 1571-0661
Abstract: A powerful method referred to as stochastic pruning is introduced for analysing the performance of common complex systems whose component failures follow a homogeneous Poisson process. The method has been applied to create a very fast solver for estimating the production availability of large repairable flow networks with complex topology. It is shown that the key performance measures production availability and system reliability are all properties of a stochastically pruned network with corresponding pruning probabilities. The high-speed solver is based on an important result regarding the average total output of a repairable system including components characterised by constant failure/hazard rates. The average output over a specified operation time interval is given by the ratio of the expected momentary output of the stochastically pruned system, where the separate components are pruned with probabilities equal to their unavailabilities, and the maximum momentary output in the absence of component failures. The running time of the algorithm for determining the expected total output of the system over a specified time interval is independent of the length of the operational interval and the failure frequencies of the edges. The high-speed solver has been embedded in a software tool, with a graphical user interface by which a flow network topology is drawn on screen and the parameters characterising the edges and the nodes are easily specified. The software tool has been used to analyse a gas production network and to study the impact of the network topology on the network performance. It is shown that two networks built with identical type and number of components may have very different performance levels, because of slight differences in their topology.
Todinov M, 'Evaluating the risk of unsatisfied demand on a time interval'
Artificial Intelligence Research 5 (1) (2016) pp.67-77
ISSN: 1927-6974 eISSN: 1927-6982
Abstract: This paper focuses on an important and very common problem and presents a theoretical framework for solving it: determining the risk of unsatisfied request from users placing random demands on a time interval. For the common case of a single source servicing a number of consumers, a closed-form solution has been derived for the risk of collision of random demands. Based on the closed-form solution, an efficient optimisation method has been developed for determining the optimal number of consumers that can be serviced by a single source, such that the probability of unsatisfied demand remains below a maximal tolerable level. A central part of the proposed theoretical framework is a general equation evaluating the risk of unsatisfied demand by the expected fraction of time of unsatisfied demand. The derived equation covers multiple sources servicing multiple consumers. Finally, the conducted parametric studies revealed an unexpected finding: the risk of collision of random demands on a time interval is practically insensitive to the standard deviations of the durations of demands. This surprising result provides the valuable opportunity to work with random demand times characterised by their means only, without supplying their probability distributions or variances.
Todinov MT, Same A, 'A fracture condition incorporating the most unfavourable orientation of the crack'
International Journal of Mechanics and Materials in Design 11 (3) (2015) pp.243-252
ISSN: 1569-1713
Abstract: A fracture condition incorporating the most unfavourable orientation of the crack has been derived to improve the safety of loaded brittle components with complex shape, whose loading results in a three-dimensional stress state. With a single calculation, an answer is provided to the important question of whether a randomly oriented crack at a particular location in the stressed component will cause fracture. Brittle fracture is a dangerous failure mode and requires a conservative design calculation. The presented experimental results show that the locus of stress intensity factors which result in mixed-mode fracture is associated with significant uncertainty. Consequently, a new approach to the design of safety-critical components has been proposed, based on a conservative safe zone located away from the scatter band defining fracture states. A postprocessor based on the proposed fracture condition and conservative safe zone can easily be developed for testing loaded safety-critical components with complex shape. For each finite element, only a single computation is made, which guarantees a high computational speed. This makes the proposed approach particularly useful for incorporation in a design optimisation loop.
-
Todinov MT, 'Reducing risk through segmentation, permutations, time and space exposure, inverse states, and separation'
International Journal of Risk and Contingency Management 4 (3) (2015) pp.1-21
ISSN: 2160-9624 eISSN: 2160-9632
Abstract: The paper features a number of new generic principles for reducing technical risk with a very wide application area. Permutations of interchangeable components/operations in a system can reduce significantly the risk of system failure at no extra cost. Reducing the time of exposure and the space of exposure can also reduce risk significantly. Technical risk can be reduced effectively by introducing inverse states countering negative effects during service. The application of this principle in logistic supply networks leads to a significant reduction of the risk of congestion and delays. The associated reduction of transportation costs and environmental pollution has the potential to save billions of dollars for the world economy. Separation is a risk-reduction principle which is very efficient in the cases of separating functions to be carried out by different components and for blocking out a common cause. Segmentation is a generic principle for risk reduction which is particularly efficient in reducing the load distribution, the vulnerability to a single failure, the hazard potential and damage escalation.
-
Todinov M, 'The same sign local effects principle and its application to technical risk reduction'
International Journal of Reliability and Safety 9 (4) (2015)
ISSN: 1479-389X
Abstract: A simple yet powerful general risk-reduction principle has been formulated, related to systems each state of which can be obtained from a given initial state by adding the effects from a specified set of modifications. An important application of the formulated principle has been found in determining the global extremum of multivariable functions whose partial derivatives maintain the same sign in a rectangular domain. The proposed generic principle has also been applied with success to minimise the transportation costs related to a set of interchangeable sources servicing a set of destinations. A counter-example has been given which demonstrates for the first time that selecting the nearest available source to supply the destinations along the shortest available paths does not guarantee an optimal solution. This counter-intuitive result is contrary to the long-standing and well-established practices in network optimisation. The application of the proposed generic principle in logistic supply networks leads to a significant reduction of the risk of congestion and delays.
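A small sketch of the extremum consequence mentioned above (finding the global extremum of a function whose partial derivatives keep the same sign over a rectangular domain) follows; the function, bounds and sign pattern are hypothetical and serve only to show the corner-point evaluation.

# If every partial derivative of f keeps a constant sign over a box, the global
# maximum is at the corner taking the upper bound of each variable with a
# non-negative derivative and the lower bound of each variable with a
# non-positive derivative; the opposite corner gives the global minimum.
def corner_extrema(f, bounds, signs):
    # bounds: [(lo, hi), ...]; signs[i] = +1 if df/dx_i >= 0 on the box, -1 if <= 0
    x_max = [hi if s > 0 else lo for (lo, hi), s in zip(bounds, signs)]
    x_min = [lo if s > 0 else hi for (lo, hi), s in zip(bounds, signs)]
    return f(*x_min), f(*x_max)

# Hypothetical example: f increases in x and decreases in y on [1, 3] x [2, 5].
f = lambda x, y: 4 * x - y ** 2
print(corner_extrema(f, [(1, 3), (2, 5)], [+1, -1]))   # minimum f(1, 5), maximum f(3, 2)

-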
Todinov MT, 'Dominated parasitic flow loops in networks'
International Journal of Operations Research 11 (1) (2014) pp.1-17
ISSN: 1813-713X eISSN: 1813-7148
Abstract: The paper introduces the concept 'dominated parasitic flow loops' and demonstrates that these occur naturally in real networks transporting interchangeable commodity. The dominated parasitic flow loops are augmentable broken loops which have a dominating flow in one particular direction of traversing. The dominated parasitic flow loops are associated with transportation losses, congestion and increased pollution of the environment and are highly undesirable in real flow networks.
The paper derives a necessary and sufficient condition for the non-existence of dominated parasitic flow loops in the case of presence of paths with zero and non-zero flow. The necessary and sufficient condition is at the basis of a method
for determining the probability of a dominated parasitic flow loop. The results demonstrate that the probability of a dominated parasitic flow loop is very large and increases very quickly as the number of flow paths increases.
Dominated parasitic flow loops can be drained by augmenting them with flow, which results in an overall decrease of the transportation cost, without affecting the quantity of delivered commodity from sources to destinations. Accordingly, an
efficient algorithm for removing dominated parasitic flow loops has been presented and a number of important applications have been identified. The presented algorithm has the potential to save a significant amount of resources for the world economy.
-
Todinov MT, 'Optimal allocation of limited resources among discrete risk-reduction options'
Artificial Intelligence Research 3 (4) (2014)
ISSN: 1927-6974 eISSN: 1927-6982
Abstract: This study exposes a critical weakness of the (0-1) knapsack dynamic programming approach, widely used for optimal allocation of resources. The (0-1) knapsack dynamic programming approach could waste resources on insignificant improvements and prevent the more efficient use of the resources to achieve maximum benefit. Despite the numerous extensive studies, this critical shortcoming of the classical formulation has been overlooked. The main reason is that the standard (0-1) knapsack dynamic programming approach has been devised to maximise the benefit derived from items filling a space with no intrinsic value. While this is an appropriate formulation for packing and cargo loading problems, in applications involving capital budgeting, this formulation is deeply flawed. The reason is that budgets do have intrinsic value and their efficient utilisation is just as important as the maximisation of the benefit derived from the budget allocation. Accordingly, a new formulation of the (0-1) knapsack resource allocation model is proposed, where the weighted sum of the benefit and the remaining budget is maximised instead of the total benefit. The proposed optimisation model produces solutions superior to both the standard (0-1) dynamic programming approach and the cost-benefit approach.
On the basis of common parallel-series systems, the paper also demonstrates that, because of synergistic effects, sets including the same number of identical options could remove different amounts of total risk. The existence of synergistic effects does not permit the application of the (0-1) dynamic programming approach. In this case, specific methods for optimal resource allocation should be applied. Accordingly, the paper formulates and proves a theorem stating that the maximum amount of removed total risk from operations and systems with a parallel-series logical arrangement is achieved by using the available budget preferentially on improving the reliability of operations/components belonging to the same parallel branch. Improving the reliability of randomly selected operations/components not forming a parallel branch leads to a sub-optimal risk reduction. The theorem is a solid basis for achieving a significant risk reduction for systems and processes with a parallel-series logical arrangement.
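A compact dynamic-programming sketch of the reformulated objective is given below: the weighted sum of the achieved benefit and the remaining budget is maximised, rather than the benefit alone, so cheap but insignificant options are no longer selected merely to exhaust the budget. The weight, costs and benefits are hypothetical.

# (0-1) knapsack sketch with the modified objective:
#   maximise  w * (total benefit) + (1 - w) * (remaining budget)
def allocate(options, budget, w=0.7):
    # options: list of (cost, benefit) pairs with integer costs (assumed data)
    NEG = float("-inf")
    exact = [NEG] * (budget + 1)          # exact[c] = best benefit at total cost exactly c
    exact[0] = 0.0
    for cost, benefit in options:
        for c in range(budget, cost - 1, -1):   # reverse loop keeps each option 0-1
            if exact[c - cost] > NEG:
                exact[c] = max(exact[c], exact[c - cost] + benefit)
    # choose the spend level that maximises the weighted objective
    return max((w * b + (1 - w) * (budget - c), c, b)
               for c, b in enumerate(exact) if b > NEG)

# Hypothetical risk-reduction options as (cost, removed risk) pairs and a budget of 7 units.
options = [(4, 9.0), (3, 7.5), (2, 1.1), (1, 0.2)]
print(allocate(options, budget=7))        # (objective, money spent, risk removed)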
-
Todinov MT, 'The throughput flow constraint theorem and its applications'
International Journal of Advanced Computer Science and Applications 5 (3) (2014)
ISSN: 2158-107X
Abstract: The paper states and proves an important result related to the theory of flow networks with disturbed flows: “the throughput flow constraint in any network is always equal to the throughput flow constraint in its dual network”. After the failure or congestion of several edges in the network, the throughput flow constraint theorem provides the basis of a very efficient algorithm for determining the edge flows which correspond to the optimal throughput flow from sources to destinations, which is the throughput flow achieved with the smallest amount of generation shedding from the sources. In the case where a failure of an edge causes a loss of the entire flow through the edge, the throughput flow constraint theorem permits the calculation of the new maximum throughput flow to be done in O(m) time, where m is the number of edges in the network. In this case, the new maximum throughput flow is calculated by inspecting the network only locally, in the vicinity of the failed edge, without inspecting the rest of the network. The superior average running time of the presented algorithm makes it particularly suitable for decongesting overloaded transmission links of telecommunication networks in real time. In the paper, it is also shown that the deliberate choking of flows along overloaded edges, leading to a generation of momentary excess and deficit flow, provides a very efficient mechanism for decongesting overloaded branches.
-
Todinov MT, 'Fast augmentation algorithms for maximising the output flow in repairable flow networks after edge failures'
International Journal of Systems Science 44 (10) (2013) pp.1807-1830
ISSN: 0020-7721
Abstract: The article discusses a number of fundamental results related to determining the maximum output flow in a network after edge failures. On the basis of four theorems, we propose very efficient augmentation algorithms for restoring the maximum possible output flow in a repairable flow network after an edge failure. In many cases, the running time of the proposed algorithm is independent of the size of the network or varies linearly with the size of the network. The high computational speed of the proposed algorithms makes them suitable for optimising the performance of repairable flow networks in real time and for decongesting overloaded branches in networks. We show that the correct algorithm for maximising the flow in a static flow network, with edges fully saturated with flow, is a special case of the proposed reoptimisation algorithm, after transforming the network into a network with balanced nodes. An efficient two-stage augmentation algorithm has also been proposed for maximising the output flow in a network with empty edges. The algorithm is faster than the classical flow augmentation algorithms. The article also presents a study on the link between performance, topology and size of repairable flow networks by using a specially developed software tool. The topology of repairable flow networks has a significant impact on their performance. Two networks built with identical type and number of components can have very different performance levels because of slight differences in their topology.
-
Todinov MT, 'New algorithms for optimal reduction of technical risk'
Engineering Optimization 45 (6) (2013) pp.719-743
ISSN: 0305-215X eISSN: 1029-0273
Abstract: The article features exact algorithms for reduction of technical risk by (1) optimal allocation of resources in the case where the total potential loss from several sources of risk is a sum of the potential losses from the individual sources; (2) optimal allocation of resources to achieve a maximum reduction of system failure; and (3) making an optimal choice among competing risky prospects. The article demonstrates that the number of activities in a risky prospect is a key consideration in selecting the risky prospect. As a result, the maximum expected profit criterion, widely used for making risk decisions, is fundamentally flawed, because it does not consider the impact of the number of risk-reward activities in the risky prospects. A popular view, that if a single risk-reward bet with positive expected profit is unacceptable then a sequence of such identical risk-reward bets is also unacceptable, has been analysed and proved incorrect.
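The point about sequences of risk-reward bets can be illustrated with a few lines of simulation; the bet parameters below are hypothetical and the simulation is only a sketch of the argument, not the exact analysis in the article.

import random

# A single bet with positive expected profit can still lose most of the time,
# yet the probability of an overall loss from a sequence of such independent
# bets can be much smaller.
def probability_of_net_loss(n_bets, p_win=0.4, gain=100.0, loss=50.0,
                            samples=100000, seed=2):
    rng = random.Random(seed)
    losing_histories = 0
    for _ in range(samples):
        profit = sum(gain if rng.random() < p_win else -loss for _ in range(n_bets))
        losing_histories += profit < 0
    return losing_histories / samples

# Expected profit per bet: 0.4*100 - 0.6*50 = +10, yet a single bet loses 60% of the time.
for n in (1, 10, 50):
    print(n, probability_of_net_loss(n))

-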
Todinov MT, 'Parasitic flow loops in networks'
International Journal of Operations Research 10 (3) (2013) pp.109-122
ISSN: 1813-713X eISSN: 1813-7148
Abstract: Parasitic flow loops in real networks are associated with transportation losses, congestion and increased pollution of the environment. The paper shows that complex networks dispatching the same type of interchangeable commodity exhibit parasitic flow loops and that the commodity does not need to be physically travelling around a closed contour for a parasitic flow loop to be present. Consequently, a theorem giving the necessary and sufficient condition for a parasitic flow loop on randomly oriented source-destination paths in a plane has been formulated and a simple expression has been obtained for the probability of a directed flow loop. A closed-form expression has also been derived for determining the probability of a parasitic flow loop on a fixed lattice with flows whose directions are random. The results demonstrate that even for a relatively small number of intersecting flow paths, the probability of a directed flow loop is very large, which shows that the existence of directed flow loops in large and complex networks is practically inevitable. Consequently, a simple and efficient recursive algorithm has also been proposed for discovering and removing parasitic flow loops in real networks. The paper also shows that for any possible number and for any possible orientation of straight-line flow paths on a plane, it is always possible to choose the flows in the paths in such a way that no parasitic flow loops are present between the points of intersection.
In this paper, we also raise awareness of a fundamental flaw of algorithms for maximising the throughput flow published since 1956. They all leave highly undesirable parasitic flow loops in the optimised networks and are unsuitable for network optimisation without an additional stage aimed at removing them.
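A minimal sketch of draining one directed loop of flow is given below. It covers only the simplest (closed) case, not the dominated, broken loops discussed elsewhere: a cycle is sought in the subgraph of edges that carry positive flow and the smallest flow around it is subtracted, which leaves every node balance, and hence the quantity delivered from sources to destinations, unchanged. The networkx library and the example flow pattern are assumptions used for illustration.

import networkx as nx

# Cancel one directed loop of flow (closed-loop case only).
def cancel_one_flow_loop(flow):
    # flow: dict {(u, v): units of commodity carried on edge u -> v} (assumed format)
    g = nx.DiGraph((u, v) for (u, v), f in flow.items() if f > 0)
    try:
        cycle = nx.find_cycle(g)                      # follows edge directions
    except nx.NetworkXNoCycle:
        return flow                                   # nothing to drain
    delta = min(flow[edge] for edge in cycle)
    for edge in cycle:
        flow[edge] -= delta                           # node balances are unchanged
    return flow

# Hypothetical flows: 5 units circulate a -> b -> c -> a on top of a 7-unit delivery a -> d.
flow = {("a", "b"): 5, ("b", "c"): 5, ("c", "a"): 5, ("a", "d"): 7}
print(cancel_one_flow_loop(flow))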
-
Todinov MT, 'The dual network theorem for static flow networks and its application for maximising the throughput flow'
Artificial Intelligence Research 2 (1) (2013) pp.81-106
ISSN: 1927-6974 eISSN: 1927-6982
Abstract: The paper discusses a new fundamental result in the theory of flow networks referred to as the 'dual network theorem for static flow networks'. The theorem states that the maximum throughput flow in any static network is equal to the sum of the capacities of the edges coming out of the source, minus the total excess flow at all excess nodes, plus the maximum throughput flow in the dual network. For very few imbalanced nodes in a flow network, determining the throughput flow in the dual network is a task significantly easier than determining the throughput flow in the original network. This creates the basis of a very efficient algorithm for maximising the throughput flow in a network, by maximising the throughput flow in its dual network. Consequently, a new algorithm for maximising the throughput flow in a network has been proposed. For networks with very few imbalanced nodes, in the case where only the maximum throughput flow is of interest, the proposed algorithm will outperform any classical method for determining the maximum throughput flow. In this paper we also raise awareness of a fundamental flaw in classical algorithms for maximising the throughput flow in static networks with directed edges. Despite the years of intensive research on static flow networks, the classical algorithms leave undesirable directed loops of flow in the optimised networks. These directed flow loops are associated with wastage of energy and resources and increased levels of congestion in the optimised networks. Consequently, an algorithm is also proposed for discovering and removing directed loops of flow in networks.
-
Todinov M, 'Algorithms for minimising the lost flow due to failed components in repairable flow networks with complex topology'
International Journal of Reliability and Safety 6 (4) (2012) pp.283-310
ISSN: 1479-389X
Abstract: A number of fundamental theorems related to non-reconfigurable repairable flow networks have been stated and proved. For a specified source-to-sink path, the difference between the sum of the unavailabilities of its forward edges and the sum of the unavailabilities of its backward edges is the path resistance. In a repairable flow network, the absence of augmentable cyclic paths with negative resistance is a necessary and sufficient condition for a minimum lost flow due to edge failures. For a specified source-to-sink path, the difference between the sum of the hazard rates of its forward empty edges and the sum of the hazard rates of its backward empty edges is the flow disruption number of the path. The absence of augmentable cyclic paths with a negative flow disruption number is a necessary and sufficient condition for a minimum probability of disruption of the throughput flow by edge failures.
-
Todinov MT, 'Topology optimisation of repairable flow networks for a maximum average availability'
Computers & Mathematics with Applications 64 (12) (2012) pp.3729-3746
ISSN: 0898-1221 eISSN: 1873-7668
Abstract: We state and prove a theorem regarding the average production availability of a repairable flow network composed of independently working edges whose failures follow a homogeneous Poisson process. The average production availability is equal to the average of the maximum output flow rates on demand from the network, calculated after removing the separate edges with probabilities equal to the edges' unavailabilities. This result creates the basis of extremely fast solvers for the production availability of complex repairable networks, the running time of which is independent of the length of the operational interval, the failure frequencies, or the lengths of the downtimes for repair. The computational speed of the production availability solver has been extended further by a new algorithm for maximising the output flow in a network after the removal of several edges, which does not require determining the feasible edge flows in the network. The algorithm for maximising the network flow is based on a new theorem, referred to as 'the maximum flow after edge failures theorem', stated and proved for the first time. Finally, unlike heuristic optimisation algorithms, the proposed algorithm for a topology optimisation of the network always determines the optimal solution.
The high computational speed of the developed production availability solver created the possibility for embedding it in simulation loops, performing a topology optimisation of large and complex repairable networks, aimed at attaining a maximum average availability within a specified budget for building the network. An exact optimisation method has been proposed, based on pruning the full-complexity network by using the branch and bound method as a way of exploring possible network topologies. This makes the proposed algorithm much more efficient, compared to an algorithm implementing a full exhaustive search. In addition, the proposed method produces an optimal solution compared to heuristic optimisation methods.
The application of the branch and bound method is possible because of the monotonic dependence of the production availability on the number of edges pruned from the full-complexity network.
-
Todinov M, 'Analysis and optimization of repairable flow networks with complex topology'
IEEE Transactions on Reliability 60 (1) (2011) pp.111-124
ISSN: 0018-9529 eISSN: 1558-1721
Abstract: We propose a framework for analysis and optimization of repairable flow networks by (i) stating and proving the maximum flow minimum flow path resistance theorem for networks with merging flows; (ii) a discrete-event solver for determining the variation of the output flow from repairable flow networks with complex topology; (iii) a procedure for determining the threshold flow rate reliability for repairable networks with complex topology; (iv) a method for topology optimization of repairable flow networks; and (v) an efficient algorithm for maximizing the flow in non-reconfigurable flow networks with merging flows. Maximizing the flow in a static flow network does not necessarily guarantee that the flow in the corresponding non-reconfigurable repairable network will be maximized. In this respect, we introduce a new concept related to repairable flow networks: "a specific resistance of a flow path", which is essentially the average percentage of losses from component failures for a flow path from the source to the sink. A very efficient algorithm based on adjacency arrays has also been proposed for determining all minimal flow paths in a network with complex topology and cycles. We formulate and prove a fundamental theorem about non-reconfigurable repairable flow networks with merging flows. The flow in a repairable flow network with merging flows can be maximized by preferentially saturating directed flow paths from the sources to the sink, characterized by the largest average availability. The procedure starts with the flow path with the largest average availability (the smallest specific resistance), and continues by saturating the unsaturated directed flow path with the largest average availability until no more flow paths can be saturated. A discrete-event solver for reconfigurable repairable flow networks with complex topology has also been constructed. The proposed discrete-event solver maximizes the flow rate in the network upon each component failure and return from repair. By maximizing the flow rate upon each component failure and return from repair, the discrete-event solver ensures a larger total output flow during a specified time interval. The designed simulation procedure for determining the threshold flow rate reliability is particularly useful for comparing flow network topologies, and selecting the topology characterized by the largest threshold flow rate reliability. It is also very useful in deciding whether the resources allocated for purchasing extra redundancy are justified. Finally, we propose a new optimization method for determining the network topology combining a maximum output flow rate attained within a specified budget for building the network. The optimization method is based on a branch and bound algorithm combined with pruning the full-complexity network as a way of exploring the possible repairable networks embedded in the full-complexity network.
-
Todinov M, 'The cumulative stress hazard density as an alternative to the Weibull model'
International Journal of Solids and Structures 47 (24) (2010) pp.3286-3296
ISSN: 0020-7683
Abstract: A simple, easily reproduced experiment based on artificial flaws has been proposed which demonstrates that the distribution of the minimum failure load does not necessarily follow a Weibull distribution. The experimental result presented in the paper clearly indicates that the Weibull distribution, with its strictly increasing function, is incapable of approximating a constant probability of failure over a loading region. New fundamental concepts have been introduced, referred to as 'hazard stress density' and 'cumulative hazard stress density'. These concepts helped derive an equation giving the probability of failure without making use of the notions 'flaws' and 'locally initiated failure by flaws'. As a result, the derived equation is more general than earlier models. The cumulative hazard stress density is an important fingerprint of materials and can be used for determining the reliability of loaded components. It leaves materials to 'speak for themselves' by not imposing a power-law dependence on the variation of the critical flaws, which is always the case if the Weibull model is used. An important link with earlier models has also been established. We show that the cumulative hazard stress density is numerically equal to the product of the number density of the flaws with a potential to cause failure and the probability that a flaw will be critical at the specified loading stress. We show that predictions of the probability of failure from tests related to a small gauge length to a large gauge length are associated with large errors which increase in proportion with the ratio of the gauge lengths. Large gauge length ratios amplify the inevitable errors in the probability of failure associated with the small gauge length to a level which renders the predicted probability of failure of the large gauge length meaningless. Finally, a general integral has been derived, giving the reliability associated with a time interval and random loading of a material with flaws. The integral has been validated by a Monte Carlo simulation.
-
Todinov M, 'Is Weibull distribution the correct model for predicting probability of failure initiated by non-interacting flaws?'
International Journal of Solids and Structures 46 (3-4) (2009) pp.887-901
ISSN: 0020-7683
Abstract: The utility of the Weibull distribution has been traditionally justified with the belief that it is the mathematical expression of the weakest-link concept in the case of flaws locally initiating failure in a stressed volume. This paper challenges the Weibull distribution as a mathematical formulation of the weakest-link concept and its suitability for predicting the probability of failure locally initiated by flaws. The paper shows that the Weibull distribution predicts correctly the probability of failure locally initiated by flaws if and only if the probability that a flaw will be critical is a power law, or can be approximated by a power law, of the applied stress. Contrary to the common belief, on the basis of a theoretical analysis and Monte Carlo simulations we show that in general, for non-interacting flaws randomly located in a stressed volume, the distribution of the minimum failure stress is not necessarily a Weibull distribution. For the simple cases of a single group of identical flaws, or two flaw size groups each of which contains identical flaws, the Weibull distribution fails to predict correctly the probability of failure. Furthermore, if in a particular load range no new critical flaws are created by increasing the applied stress, the Weibull distribution also fails to predict correctly the probability of failure of the component. In all these cases, however, the probability of failure is correctly predicted by the suggested alternative equation. This equation is the correct mathematical formulation of the weakest-link concept related to random flaws in a stressed volume. The equation does not require any assumption concerning the physical nature of the flaws and the physical mechanism of failure and can be applied in cases of locally initiated failure by non-interacting entities.
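The alternative equation itself is not quoted in the abstract. For orientation only, a standard Poisson weakest-link form that is consistent with the statements above (and is an assumption here, not a quotation from the paper) is

P_f(\sigma) = 1 - \exp\!\left[-\lambda V\, F_c(\sigma)\right],

where \lambda is the number density of the flaws, V the stressed volume and F_c(\sigma) the probability that a flaw is critical at the applied stress \sigma; this expression reduces to the two-parameter Weibull model precisely when F_c(\sigma) is a power law of the applied stress.

-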
Todinov M, 'Robust design using variance upper bound theorem'
International Journal of Performability Engineering 5 (4) (2009) pp.339-356
ISSN: 0973-1318
Abstract: The exact upper bound of the variance of properties from multiple sources is attained from sampling not more than two sources. This paper discusses important applications of this result, referred to as the variance upper bound theorem. A new conservative, non-parametric estimate has been proposed for the capability index of a process whose output combines contributions from multiple sources of variation. A new method for assessing and increasing the robustness of processes, operations and products where the mean value can be easily adjusted or is not critical has been presented, based on the variance upper bound theorem. We show that the worst-case variation of a property from multiple sources, obtained by using the variance upper bound theorem, can be used as a basis for developing robust engineering designs and products. If a design is capable of accommodating the worst-case variation of the reliability-critical parameters, it will also be capable of accommodating the variation of the reliability-critical parameters from any combination of sources of variation and mixing proportions. In this respect, a new algorithm for virtual testing based on the variance upper bound theorem has been proposed for determining the probability of a faulty assembly from multiple sources. For sources of variation that can be removed, the robustness can be improved further by removing the source that yields the largest decrease in the variance upper bound. Consequently, the corresponding algorithm is also presented. A number of engineering applications have been discussed where the variance upper bound theorem can be used to assess and increase the robustness of mechanical and electrical components, manufacturing processes and operations.
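A small numerical sketch of the variance upper bound idea follows: the variance of a property sampled from several sources in unknown mixing proportions is compared with the worst-case variance over mixtures of at most two sources. The source means, variances and the grid search are a hypothetical illustration, not the theorem's proof.

from itertools import combinations

# Variance of a mixture of sources with means mu_i, variances v_i and proportions p_i.
def mixture_variance(proportions, means, variances):
    m = sum(p * mu for p, mu in zip(proportions, means))
    return sum(p * (v + mu * mu) for p, mu, v in zip(proportions, means, variances)) - m * m

# Worst-case variance over all mixtures of at most two sources (grid search sketch).
def two_source_upper_bound(means, variances, grid=1000):
    best = max(variances)                                  # single-source case
    for i, j in combinations(range(len(means)), 2):
        for k in range(grid + 1):
            p = k / grid
            best = max(best, mixture_variance([p, 1 - p],
                                              [means[i], means[j]],
                                              [variances[i], variances[j]]))
    return best

means, variances = [10.0, 12.5, 9.0], [0.4, 0.9, 0.6]
print(two_source_upper_bound(means, variances))            # worst-case variance bound

-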
Todinov M, 'Potential benefit, potential loss and potential gain from competing opportunity and failure events'
International Journal of Risk Assessment and Management 10 (40940) (2008)
ISSN: 1466-8297
Abstract: A quantitative framework is presented dealing with competing opportunity and failure events in a finite time interval. The framework is based on the new fundamental concepts potential benefit, potential loss and potential gain, for which closed-form expressions regarding their distributions are derived and verified by a simulation. It is demonstrated that a decision strategy based on multiple event occurrences yields a very different gain compared with a decision strategy based on the next event occurrence. The results are illustrated by examples supporting decision making.
-
Todinov M, 'Risk-based design on limiting the probability of system failure at a minimum total cost'
Risk Management 10 (2) (2008) pp.104-121
ISSN: 1460-3799
Abstract: A basic principle for risk-based design has been formulated: the larger the losses from failure of a component, the smaller the upper bound of its hazard rate and the larger the required minimum reliability level from the component. A generalized version and analytical expression for this important principle have also been formulated for multiple failure modes. It is argued that the traditional approach based on a risk matrix is suitable only for single failure modes/scenarios. In the case of multiple failure modes (scenarios), the individual risks should be aggregated and compared with the maximum tolerable risk. In this respect, a new method for risk-based design is proposed, based on limiting the probability of system failure below a maximal acceptable level at a minimum total cost (the sum of the cost for building the system and the risk of failure). The essence of the method can be summarised in three steps: developing a system topology with the maximum possible reliability, reducing the resultant system to a system with generic components, for each of which several alternatives exist including non-existence of the component, and a final step involving selecting a set of alternatives limiting the probability of system failure at a minimum total cost. An exact recursive algorithm for determining the set of alternatives for the components is also proposed.
-
Todinov M, 'A comparative method for improving the reliability of brittle components'
Nuclear Engineering and Design 239 (2) (2008) pp.214-220
ISSN: 0029-5493
Abstract: Calculating the absolute reliability built into a product is often an extremely difficult task because of the complexity of the physical processes and physical mechanisms underlying the failure modes, the complex influence of the environment and the operational loads, the variability associated with reliability-critical design parameters and the non-robustness of the prediction models. Predicting the probability of failure of loaded components with complex shape, for example, is associated with uncertainty related to: the type of existing flaws initiating fracture, the size distributions of the flaws, the locations and orientations of the flaws, and the microstructure and its local properties. Capturing these types of uncertainty, necessary for a correct prediction of the reliability of components, is a formidable task which does not need to be addressed if a comparative reliability method is employed, especially if the focus is on reliability improvement. The new comparative method for improving the resistance to failure initiated by flaws proposed here is based on an assumed failure criterion, an equation linking the probability that a flaw will be critical with the probability of failure associated with the component, and a finite element solution for the distribution of the principal stresses in the loaded component. The probability that a flaw will be critical is determined directly, after a finite number of steps equal to the number of finite elements into which the component is divided. An advantage of the proposed comparative method for improving the resistance to failure initiated by flaws is that it does not rely on a Monte Carlo simulation and does not depend on knowledge of the size distribution of the flaws and the material properties. This essentially eliminates uncertainty associated with the material properties and the population of flaws. On the basis of a theoretical analysis we also show that, contrary to the common belief, in general, for non-interacting flaws randomly located in a stressed volume, the distribution of the minimum failure stress is not necessarily described by a Weibull distribution. For the simple case of a single group of flaws all of which become critical beyond a particular threshold value, for example, the Weibull distribution fails to predict correctly the probability of failure. If in a particular load range no new critical flaws are created by increasing the applied stress, the Weibull distribution also fails to predict correctly the probability of failure of the component. In these cases, however, the probability of failure is correctly predicted by the suggested alternative equation. The suggested equation is the correct mathematical formulation of the weakest-link concept related to random flaws in a stressed volume. The equation does not require any assumption concerning the physical nature of the flaws and the physical mechanism of failure and can be applied in any situation of locally initiated failure by non-interacting entities.
-
Todinov M, 'Efficient algorithm and discrete-event solver for stochastic flow networks with converging flows'
International Journal of Reliability and Safety 2 (4) (2008) pp.286-308
ISSN: 1479-389X
Abstract: An efficient algorithm is proposed for determining the quantity of transferred flow and the losses from failures of repairable stochastic networks with converging flows. We show that the computational speed related to determining the variation of the flow through a stochastic flow network can be improved enormously if the topology of the network is exploited directly. The proposed algorithm is based on a new result related to maximising the flow in networks with converging flows. An efficient discrete-event solver for repairable networks with converging flows has also been developed, based on the proposed algorithm. The solver handles repairable networks with multiple sources of production flow, multi-commodity flows, overlapping failures, multiple failure modes, redundant components and redundant branches of components. The solver is capable of tracking the cumulative distribution of the potential losses from failures associated with the whole network and with each component in the network.
-
Iacopino G, Todinov M, 'Monte Carlo simulation of multiaxial fracture in brittle components containing flaws'
Operation Maintenance and Materials Issues 5 (2) (2008) pp.1-17
ISSN: 1740-5181
Abstract: An analysis of the effect of the variability associated with the material microstructure, due to the presence of flaws such as inclusions and pores, on the strength distribution of mechanical components is conducted. For this purpose, a computational procedure, based on the coupled use of Finite Element Analysis and Monte Carlo Simulation, is proposed to evaluate the failure probability of mechanical components. Finite element analysis is employed to determine the stress field generated by the applied load. The random distribution of flaws in the material microstructure is modelled by a homogeneous Poisson process and its effect on the probability of failure evaluated by a Monte Carlo simulation. A mixed-mode fracture criterion, based on the coplanar strain-energy release rate, is used to establish whether a flaw is unstable. The proposed model has been applied to determine the probability of failure initiated by flaws for a turbine blade. For various loading configurations, the component strength distribution has been evaluated. The effect of the random distribution of flaws is analysed and discussed. The proposed approach allows the designer to identify the regions in the component characterised by a high probability of initiating fracture.
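A stripped-down sketch of the coupled Poisson-flaw/Monte Carlo idea is shown below. The uniform stress level and the size-based criticality criterion are hypothetical stand-ins for the finite element stress field and the mixed-mode criterion used in the paper; only the overall simulation structure is illustrated.

import math
import random

# The number of flaws in the stressed volume follows a homogeneous Poisson
# process; each flaw receives a random size and the component fails if any
# flaw is critical at the applied stress.
def failure_probability(flaw_density, volume, stress, critical_size,
                        mean_size=0.2, samples=50000, seed=3):
    rng = random.Random(seed)
    failures = 0
    for _ in range(samples):
        n_flaws = poisson_sample(rng, flaw_density * volume)
        if any(rng.expovariate(1.0 / mean_size) >= critical_size(stress)
               for _ in range(n_flaws)):
            failures += 1
    return failures / samples

def poisson_sample(rng, lam):
    # Knuth's multiplication method; adequate for the small means used here.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Hypothetical criterion: the critical flaw size decreases with the applied stress.
critical = lambda s: 1.0 / (s * s)
for stress in (0.5, 1.0, 2.0):
    print(stress, failure_probability(2.0, 1.0, stress, critical))

-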
Todinov M, 'Selecting designs with high resistance to overstress failure initiated by flaws'
Computational Materials Science 42 (2008) pp.306-315
ISSN: 0927-0256
Abstract: A powerful new technology is proposed for creating reliable and robust designs, characterised by a high resistance to failure. The new technology is based on a new mixed-mode failure criterion and a computationally very efficient simulation technique for calculating the probability of failure of a component with complex shape. The new technology handles design alternatives with complex shape and arbitrary loading. For each design shape or loading alternative, a finite element model is created by using a standard finite element package. Next, a specially designed postprocessor reads the output files from the static stress analyses and calculates the probability of failure associated with each design alternative. Finally, the design alternative characterised by the smallest probability of failure is selected. Limitations of existing approaches to statistics of failure locally initiated by flaws are also discussed. Central to the traditional approaches is the assumption that the number density of the critical flaws is a power function of the applied stress. In this paper, on the basis of counter-examples, we show that for a material with flaws the power-law assumption does not hold in common cases, such as spherical flaws in a homogeneous matrix.
-
Todinov M, 'Risk-based reliability allocation and topological optimisation based on minimising the total cost'
International Journal of Reliability and Safety 1 (4) (2007) pp.489-512
ISSN: 1479-389X
Abstract: A new method for optimisation of the topology of engineering systems is proposed, based on reliability allocation by minimising the total cost - the sum of the cost for building the system and the risk of failure. The essence of the proposed method can be summarised in three steps: developing a system topology with the maximum possible reliability; reducing the resultant system to a system with generic components, for each of which several alternatives exist; and a third step that involves reliability allocation minimising the total cost. A heuristic optimisation algorithm and an exact recursive algorithm are also proposed. Central to the proposed methods is an efficient algorithm for determining the probability of system failure. The proposed algorithms are generic and applicable to any engineering system. They are very efficient for topologically complex reliability networks containing a large number of nodes.
-
Todinov M, 'An efficient algorithm for determining the risk of structural failure locally initiated by faults'
Probabilistic Engineering Mechanics 22 (1) (2006) pp.12-21
ISSN: 0266-8920
Abstract: An efficient algorithm has been proposed for determining the probability of failure of structures containing flaws. The algorithm is based on a powerful generic equation, a central parameter in which is the conditional individual probability of initiating failure by a single flaw. The equation avoids conservative predictions related to the probability of locally initiated failure and is a powerful alternative to existing approaches. It is based on the concept of "conditional individual probability of initiating failure" characterising a single fault, which permits us to relate in a simple fashion the conditional individual probability of failure characterising a single fault to the probability of failure characterising a population of faults. A method for estimating the conditional individual probability has been proposed, based on combining a Monte Carlo simulation and a failure criterion. The generic equation has been modified to determine the probability of fatigue failure initiated by flaws. Other important applications discussed in the paper include: comparing different types of loading and selecting the type of loading associated with the smallest probability of over-stress failure; optimizing designs by minimizing their vulnerability to over-stress failure initiated by flaws; determining failure triggered by random faults in a large system; and determining the probability of overloading of a supply system from random demands.
-
Todinov MT, 'Equations and a fast algorithm for determining the probability of failure initiated by flaws'
International Journal of Solids and Structures 43 (17) (2006) pp.5182-5195
ISSN: 0020-7683
Abstract: Powerful equations and an efficient algorithm are proposed for determining the probability of failure of loaded components with complex shape, containing multiple types of flaws. The equations are based on the concept 'conditional individual probability of initiating failure' characterising a single flaw, given that it is in the stressed component. The proposed models relate in a simple fashion the conditional individual probability of failure characterising a single flaw (estimated by a Monte Carlo simulation) to the probability of failure characterising a population of flaws. The derived equations constitute the core of a new statistical theory of failure initiated by flaws in the material, with important applications in optimising designs by decreasing their vulnerability to failure initiated by flaws during overloading or fatigue cycling. Methods have also been developed for specifying the maximum acceptable level of the flaw number density and the maximum size of the stressed volume which guarantee that the probability of failure initiated by flaws remains below a maximum acceptable level. An important parameter referred to as 'detrimental factor' is also introduced. Components with identical geometry and material, and with the same detrimental factors, are characterised by the same probability of failure. It is argued that eliminating flaws from the material should concentrate on the types of flaws characterised by large detrimental factors.
The equations proposed avoid conservative predictions resulting from equating the probability of failure initiated by a flaw in a stressed region with the probability of existence of the flaw in that region.
-
Todinov MT, 'Reliability analysis of complex systems based on the losses from failures'
International Journal of Reliability, Quality and Safety Engineering 13 (2) (2006) pp.127-148
ISSN: 0218-5393 eISSN: 1793-6446
Abstract: The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. In this paper it is demonstrated that increasing the reliability of the system does not always mean decreasing the losses from failures. An inappropriate increase of the reliability of the system may lead to a simultaneous increase of the losses from failure. In other words, a system reliability improvement which is disconnected from the losses from failure does not necessarily reduce the losses from failures. An efficient discrete-event simulation model and algorithm have been proposed for reliability analysis based on the losses from failure for production systems with complex topology. The model links reliability with the losses from failures. A new algorithm has also been developed for system reliability analysis related to production systems based on multiple production units, where the absence of a critical failure means that at least m out of n production units are working.
The parametric study conducted on the basis of the developed models revealed that a dual-control production system is characterized by enhanced production availability, which increases with increasing the number of production units in the system. A production unit from a dual-control production system including multiple production units is characterized by a larger availability compared to a production unit from a dual-control production system including a single production unit.
The proposed approach has been demonstrated by comparing the losses from failures and the net present values of two competing design topologies: one based on a single-channel control and the other based on a dual-channel control. The proposed models have been successfully applied and tested for reliability value analysis of production systems in deepwater oil and gas production.
It is also argued that the reliability allocation in a production system should be done to maximize the net profit/value obtained from the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the net profit by minimizing the sum of the capital costs and the expected losses from failures has been proposed. Reliability allocation which maximizes the net profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually, the reliabilities of the components which minimize the sum of the capital costs and the expected losses from failures.
-
Todinov MT, 'Reliability value analysis of complex production systems based on the losses from failures'
International Journal of Quality & Reliability Management 23 (6) (2006) pp.696-718
ISSN: 0265-671X
Abstract:
Purpose – The aim of this paper is to propose efficient models and algorithms for reliability value analysis of complex repairable systems linking reliability and losses from failures.
Design/methodology/approach
– The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. In this paper it is demonstrated that a system with larger reliability does not necessarily mean a system with smaller losses from failures. In other words, a system reliability improvement which is disconnected from the losses from failures does not necessarily reduce the losses from failures. An efficient discrete-event simulation model and algorithm are proposed for tracking the losses from failures for systems with complex topology. A new algorithm is also proposed for system reliability analysis related to production systems based on multiple production units, where the absence of a critical failure means that at least m out of n production units are working.
Findings
– A model for determining the distribution of the net present value (NPV) characterising the production systems is developed. The model has significant advantages compared to models based on the expected value of the losses from failures. The model developed in this study reveals the variation of the NPV due to variation of the number of critical failures and their times of occurrence during the entire life‐cycle of the systems.
Practical implications
– The proposed models have been successfully applied and tested for reliability value analysis of productions systems in deepwater oil and gas production.
Originality/value
– The proposed approach has been demonstrated by comparing the losses from failures and the NPVs of two competing design topologies: one based on a single‐channel control and the other based on a dual‐channel control.
Books
-
Todinov M, Interpretation of Algebraic Inequalities: Practical Engineering Optimisation and Generating New Knowledge, CRC Press (2021)
ISBN: 9781032059174 eISBN: 9781003199830
Abstract: This book introduces a new powerful method based on algebraic inequalities for optimising engineering systems and processes, with applications in mechanical engineering, electrical engineering, reliability engineering, risk management and operational research.
The book shows that the application potential of algebraic inequalities in engineering and technology is far-reaching and certainly not limited to specifying design constraints. Algebraic inequalities are capable of handling deep unstructured uncertainty associated with design variables and control parameters. With the method presented in this book, powerful new knowledge about systems and processes can be generated through meaningful interpretation of algebraic inequalities. By covering various types of algebraic inequalities suitable for interpretation, the book demonstrates how the generated knowledge can be applied for enhancing system and process performance. Depending on the specific interpretation, knowledge applicable to systems and processes from diverse application domains can be generated from the same algebraic inequality. Furthermore, an important class of algebraic inequalities is introduced that can be used for optimising systems and processes in any area of science and technology, provided that the variables and separate terms of the inequalities stand for additive quantities.
With the presented method and various examples, the book will be of interest to engineers, students and researchers in the fields of optimisation, mechanical and electrical engineering, reliability engineering, risk management and operational research.
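As an illustration of the kind of interpretation described here (the specific example is supplied for orientation and is not quoted from the book), the classical inequality

\left(x_1 + x_2 + \dots + x_n\right)\left(\frac{1}{x_1} + \frac{1}{x_2} + \dots + \frac{1}{x_n}\right) \ge n^2, \qquad x_i > 0,

can be read in terms of additive quantities such as electrical resistances: the first factor is the equivalent resistance of the n elements connected in series and the reciprocal of the second factor is their equivalent resistance in parallel, so the series arrangement always has at least n^2 times the resistance of the parallel arrangement, with equality only when all elements are identical.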
-
Todinov MT, Risk and Uncertainty Reduction by Using Algebraic Inequalities, CRC Press (2020)
ISBN: 9780367898007 eISBN: 9781003032502
Abstract: This book covers the application of algebraic inequalities for reliability improvement and for uncertainty and risk reduction. It equips readers with powerful domain-independent methods for reducing risk based on algebraic inequalities and demonstrates the significant benefits derived from their application for risk and uncertainty reduction.
Algebraic inequalities:
• Provide a powerful reliability improvement, risk and uncertainty reduction method that transcends engineering and can be applied in various domains of human activity
• Present an effective tool for dealing with deep uncertainty related to key reliability-critical parameters of systems and processes
• Permit meaningful interpretations which link abstract inequalities with the real world
• Offer a tool for determining tight bounds for the variation of risk-critical parameters and ensuring that designs comply with these bounds to avoid failure
• Allow optimising designs and processes by minimising the deviation of critical output parameters from their specified values and maximising their performance
This book is primarily for engineering professionals and academic researchers in virtually all existing engineering disciplines.
-
Todinov M, Methods for reliability improvement and risk reduction, Wiley (2019)
ISBN: 9781119477587 eISBN: 9781119477624
Abstract: Reliability is one of the most important attributes for the products and processes of any company or organization. This important work provides a powerful framework of domain-independent reliability improvement and risk reducing methods which can greatly lower risk in any area of human activity. It reviews existing methods for risk reduction that can be classified as domain-independent and introduces the following new domain-independent reliability improvement and risk reduction methods: separation; stochastic separation; introducing deliberate weaknesses; segmentation; self-reinforcement; inversion; reducing the rate of accumulation of damage; permutation; substitution; limiting the space and time exposure; and comparative reliability models. The domain-independent methods for reliability improvement and risk reduction do not depend on the availability of past failure data, domain-specific expertise or knowledge of the failure mechanisms underlying the failure modes. Through numerous examples and case studies, this invaluable guide shows that many of the new domain-independent methods improve reliability at no extra cost or at a low cost. Using the proven methods in this book, any company and organisation can greatly enhance the reliability of its products and operations.
-
Todinov M, Reliability and Risk Models: Setting Reliability Requirements, Wiley (2015)
ISBN: 978-1-118-87332-8 -
Todinov MT, Flow networks, Elsevier (2013)
ISBN: 978-0-12-398396-1 eISBN: 9780123984067
Abstract: This book develops the theory, algorithms and applications related to repairable flow networks and networks with disturbed flows.
-
Todinov MT, Risk-based reliability analysis and generic principles for risk reduction, Elsevier (2007)
ISBN: 9780080447285 eISBN: 9780080467559
Book chapters
-
Todinov MT, 'Virtual accelerated life testing of complex systems' in Bouvry P, Gonzalez-Velez H, Kolodziej J (ed.), Intelligent decision systems in large-scale distributed environment, Springer (2011)
ISBN: 978-3-642-21270-3 eISBN: 978-3-642-21271-0
Abstract: A method has been developed for virtual accelerated testing of complex systems. Part of the method is an algorithm and a software tool for extrapolating the life of a complex system from the accelerated lives of its components. This makes the expensive task of building test rigs for life testing of complex engineering systems unnecessary and reduces drastically the amount of time and resources needed for accelerated life testing of complex systems. The impact of the acceleration stresses on the reliability of a complex system can also be determined by using the developed method. The proposed method is based on Monte Carlo simulation and is particularly suitable for topologically complex systems containing a large number of components. Part of the method is also an algorithm for finding paths in complex networks. Compared to existing path-finding algorithms, the proposed algorithm determines the existence of paths to multiple end nodes and not only to a single end node. This makes the proposed algorithm ideal for revealing the reliability of engineering systems where more than a single operating component is controlled.
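A minimal sketch of the multiple-end-node path check mentioned above is given below: a single breadth-first traversal from the start node reports which of a set of end nodes remain reachable, instead of running one search per end node. The adjacency data are hypothetical.

from collections import deque

# One breadth-first traversal answers reachability for several end nodes at once.
def reachable_end_nodes(adjacency, start, end_nodes):
    # adjacency: dict {node: [neighbour, ...]} describing the working part of the system
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return {end: (end in seen) for end in end_nodes}

# Hypothetical system: a power source feeding two buses that control a pump and a valve.
adjacency = {"power": ["bus1", "bus2"], "bus1": ["pump"], "bus2": ["valve"]}
print(reachable_end_nodes(adjacency, "power", ["pump", "valve", "sensor"]))

-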
Todinov M, 'A new criterion for design of brittle components and for assessing their vulnerability to brittle fracture' in Guedes Soares, C (ed.), Advances in Safety, Reliability and Risk Management, Springer Verlag (Germany) (2011)
ISSN: 0376-9429 ISBN: 978-0-415-68379-1 eISBN: 978-0-203-13510-5
Conference papers
-
Todinov MT, 'On two optimisation problems related to unsatisfied demand on a time interval'
(2016) pp.1505-1515
ISBN: 978-84-608-6082-2
Abstract: This paper focuses on two important optimisation problems: (i) the maximum size of the system that can be serviced by a given number of sources so that the unsatisfied demand does not exceed a tolerable level and (ii) the minimum number of sources needed to service random demands so that the unsatisfied demand does not exceed a tolerable level. To solve these problems, a computational framework for determining the expected fraction of unsatisfied demand on a time interval has been created and closed-form solutions for the expected fraction of unsatisfied demand have been derived.
-
Todinov M, 'Maximising the Amount of Transmitted Flow Through Repairable Flow Networks'
(2012) pp.163-168
ISBN: 978-1-4244-6614-6
Abstract: A fundamental theorem related to maximizing the flow in a repairable flow network with arbitrary topology has been stated and proved: 'The flow transmitted through a repairable network with arbitrary topology and a single source and sink can be maximized by (i) determining all possible flow paths from the start node (the source) to the end node (the sink); (ii) arranging the flow paths in ascending order according to their specific flow path resistance; and (iii) setting up the flow in the network by a sequential saturation of the flow paths, starting with the one with the smallest specific resistance, until the entire flow network is saturated'. Based on the proved theorem, a new method for maximizing the flow in repairable flow networks has been proposed. The method is based on the new concept 'specific resistance of a flow path'. Finally, a new stochastic optimization method has been proposed for determining the network topology combining a maximum flow and minimum cost.
-
Todinov M, 'Fast augmentation algorithms for maximizing the output flow in repairable flow networks after a component failure'
IEEE Transactions on Reliability (2011) pp.505-512
ISSN: 0018-9529 ISBN: 978-1-4577-0383-6 eISBN: 978-0-7695-4388-8
Abstract: The paper discusses new, very efficient augmentation algorithms and theorems related to maximising the flow in single-commodity and multi-commodity networks. For the first time, efficient algorithms with linear average running time O(m) in the size m of the network are proposed for restoring the maximum flow in single-commodity and multi-commodity networks after a component failure. The proposed algorithms are particularly suitable for discrete-event simulators of repairable production networks, whose analysis requires generating thousands of simulation histories, each including hundreds of component failures. In this respect, a new, very efficient augmentation method with linear running time has been proposed for restoring the maximum output flow of oil in oil and gas production networks after a component failure. Another important application of the proposed algorithms is in networks controlled in real time, where, upon failure, the network flows need to be redirected quickly in order to maintain a maximum output flow.
Published here
-
Todinov M, 'A Discrete-event Solver for Repairable Flow Networks With Complex Topology'
(2010) pp.232-237
ISBN: 978-1-4244-7837-8 eISBN: 978-0-7695-4158-7
Abstract: The paper presents a discrete-event simulator of repairable flow networks with complex topology. The solver is based on an efficient algorithm for maximizing the flow in repairable flow networks with complex topology. The discrete-event solver maximizes the flow through the repairable network upon each component failure and return from repair, which ensures a larger output flow compared to a flow maximization conducted on the static flow network. Because of the flow maximization upon failure and return from repair, the simulator also naturally tracks the variation of the output flow caused by multiple overlapping failures. The discrete-event solver determines the basic performance characteristic of repairable flow networks - the expected output flow delivered during a specified time interval in the presence of component failures.
Published here
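The expected fraction of unsatisfied demand analysed in the first conference paper above can be estimated with a simple time-discretised Monte Carlo simulation. The consumer count, number of sources, demand start and duration distributions and all numerical values in the sketch below are hypothetical; the closed-form solutions derived in the paper are not reproduced here.

#include <algorithm>
#include <iostream>
#include <random>

int main() {
    const double T = 1000.0;    // operating interval (hours), hypothetical
    const int n = 12;           // consumers, each placing one random demand
    const int s = 4;            // available sources
    const double dt = 0.5;      // time-discretisation step
    const int trials = 2000;

    std::mt19937 gen(7);
    std::uniform_real_distribution<double> start(0.0, T);
    std::exponential_distribution<double> duration(1.0 / 40.0);  // mean 40 h

    double fractionSum = 0.0;
    for (int k = 0; k < trials; ++k) {
        double begins[n], ends[n];
        for (int i = 0; i < n; ++i) {
            begins[i] = start(gen);
            ends[i] = std::min(T, begins[i] + duration(gen));
        }
        double unsatisfied = 0.0, total = 0.0;
        for (double t = 0.0; t < T; t += dt) {
            int active = 0;
            for (int i = 0; i < n; ++i)
                if (begins[i] <= t && t < ends[i]) ++active;
            total += active * dt;
            unsatisfied += std::max(0, active - s) * dt;  // demand left unserved
        }
        if (total > 0.0) fractionSum += unsatisfied / total;
    }
    std::cout << "Estimated expected fraction of unsatisfied demand: "
              << fractionSum / trials << "\n";
    return 0;
}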
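The sequential path-saturation step from the flow-maximization theorem quoted above can be sketched as follows. The toy network, its edge capacities and the specific flow path resistances are hypothetical values chosen for illustration and are not taken from the paper, which also covers the repairable (stochastic) setting.

#include <algorithm>
#include <iostream>
#include <map>
#include <utility>
#include <vector>

struct Path {
    std::vector<std::pair<int, int>> edges;  // edges as (from, to) node pairs
    double specificResistance;               // assumed per-path resistance
};

int main() {
    // Toy network from source node 0 to sink node 3 (hypothetical capacities).
    std::map<std::pair<int, int>, double> capacity = {
        {{0, 1}, 60.0}, {{0, 2}, 40.0}, {{1, 3}, 50.0}, {{2, 3}, 40.0}, {{1, 2}, 20.0}
    };
    // Step (i): candidate flow paths, with assumed specific resistances.
    std::vector<Path> paths = {
        {{{0, 1}, {1, 3}}, 1.0},
        {{{0, 2}, {2, 3}}, 1.5},
        {{{0, 1}, {1, 2}, {2, 3}}, 2.5}
    };
    // Step (ii): ascending order of specific flow path resistance.
    std::sort(paths.begin(), paths.end(),
              [](const Path& a, const Path& b) { return a.specificResistance < b.specificResistance; });

    // Step (iii): sequential saturation of the paths, respecting residual capacities.
    double totalFlow = 0.0;
    for (const Path& p : paths) {
        double bottleneck = 1e18;
        for (const auto& e : p.edges) bottleneck = std::min(bottleneck, capacity[e]);
        for (const auto& e : p.edges) capacity[e] -= bottleneck;
        totalFlow += bottleneck;
    }
    std::cout << "Flow set up by sequential path saturation: " << totalFlow << "\n";
    return 0;
}

For this toy network the sequential saturation sets up a flow of 90 units, which equals the capacity of the edges entering the sink.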
Further details
Other experience
- CRANFIELD UNIVERSITY (2005-2006), HEAD OF RISK AND RELIABILITY
Led the research, consultancy and teaching in the area of Reliability, Risk and Uncertainty modelling in the School of Applied Sciences, Cranfield University.
- CRANFIELD UNIVERSITY (2002-2004), BP LECTURER IN RELIABILITY ENGINEERING AND RISK MANAGEMENT
Research, consultancy, teaching and supervision in the area of Reliability, Risk and Uncertainty quantification in the School of Applied Sciences.
- THE UNIVERSITY OF BIRMINGHAM (1994-2001), RESEARCH SCIENTIST
Managed and conducted research in the area of uncertainty modelling related to fracture and fatigue; modelling the uncertainty in the location of the ductile-to-brittle transition region of nuclear pressure vessel steels; probability of fracture initiated by flaws; and improving the reliability of mechanical components through mathematical modelling.
- TECHNICAL UNIVERSITY OF SOFIA (1989-1994), BULGARIA, RESEARCH SCIENTIST
Managed a number of research projects in the area of modelling and simulation of heat and mass transfer and modelling phase transformation kinetics. Successfully accomplished a challenging project related to optimal cutting of sheet and bar stock in mass production. Most of the projects were funded by the Bulgarian Ministry of Science and Education and Bulgarian industry.