Author: saqibkhan

  • Software Maintenance in Software Engineering

    Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to modify and update a software application after delivery to correct errors and to improve performance. Software is a model of the real world; when the real world changes, the software requires alteration wherever possible.

    Software Maintenance is an inclusive activity that includes error corrections, enhancement of capabilities, deletion of obsolete capabilities, and optimization.

    Important Elements of Software Maintenance:

    • Bug fixing is the process of locating and resolving software flaws.
    • The process of introducing new features or making improvements to existing ones in order to satisfy users' changing needs is known as enhancement.
    • Performance optimization is the process of making software faster, more reliable, and more efficient.
    • The process of modifying software to run on new hardware or software platforms is known as porting and migration.
    • Enhancing the software architecture and design to make it more scalable and maintainable is known as re-engineering.
    • Documentation is the process of producing, revising, and keeping up to date the software documentation, which includes design documents, technical specifications, and user manuals.

    Need for Maintenance

    Software maintenance is needed in order to:

    • Correct errors
    • Accommodate changes in user requirements over time
    • Accommodate changing hardware/software requirements
    • Improve system efficiency
    • Optimize the code to run faster
    • Modify components
    • Reduce any unwanted side effects

    Thus, maintenance is required to ensure that the system continues to satisfy user requirements.

    Types of Software Maintenance


    1. Corrective Maintenance

    Corrective maintenance aims to correct any remaining errors, regardless of whether they originate in the specifications, design, coding, testing, or documentation.

    2. Adaptive Maintenance

    It involves modifying the software to match changes in the ever-changing environment.

    3. Preventive Maintenance

    It is the process by which we prevent our system from becoming obsolete. It involves the concepts of re-engineering and reverse engineering, in which an old system built with old technology is re-engineered using new technology. This maintenance prevents the system from dying out.

    4. Perfective Maintenance

    It involves improving processing efficiency or performance, or restructuring the software to enhance changeability. This may include enhancement of existing system functionality, improvement in computational efficiency, etc.

    Difficulties with software Maintenance:

    The following are the different difficulties in software maintenance.

    • The typical useful life of a software product is considered to be a maximum of 10-15 years. Software maintenance is highly costly because it is an open-ended process that may continue for decades.
    • Newer software running on modern hardware can outperform older software that was designed to work on slower computers with less memory and storage capacity. It is also common for changes to go unrecorded, which can lead to more conflicts down the road.
    • The expense of maintaining out-of-date software rises with time. A program's structure can be degraded by frequent changes, and follow-up modifications become challenging because of the eroded original structure.
    • It can be challenging to find and address problems in systems that lack documentation.
    • It is challenging to recognize and address issues in large, complex systems because they are hard to comprehend and alter. It may be necessary to modify software systems in order to meet evolving user requirements, which can be challenging and time-consuming.
    • A system that must communicate with other software or systems is challenging to maintain because modifications made to one system may have an impact on the others.
    • Maintaining a poorly tested system is challenging, since it is hard to find and address issues without understanding how the system functions in different scenarios.
    • Without workers with the requisite training and expertise, it is challenging to keep systems current and accurate.
    • Upkeep can be costly, particularly for large, intricate systems, in terms of budgeting and management.

    A well-defined maintenance procedure that governs communication, testing, validation, and version control, among other elements, is necessary to overcome these difficulties. Standard maintenance practices such as security patching, testing, and error tracking should also be part of a clearly defined maintenance plan. Staff members who possess the knowledge and skills to keep their systems current are also crucial.

    Advantages of software maintenance:

    • Better software quality: Frequent software maintenance ensures that the application functions accurately, effectively, and in accordance with user requirements.
    • Improved security: As part of routine maintenance, security patches and updates can be applied to make sure the program is safe from potential threats and attacks.
    • Longer software life: Maintaining the software on a regular basis encourages user acceptance and satisfaction and keeps it operating. The software can be used for longer periods of time, and costly replacements can be avoided with proper maintenance.
    • Cost savings: By preventing more costly issues before they arise, routine software maintenance can lower the software owner's overall expenses.
    • Increased focus on business objectives: Maintaining your software on a regular basis will help you keep up with your company's evolving needs. Increased productivity and overall business efficiency may result from this.
    • Competitive advantage: By enhancing functionality and user experience, routine software maintenance gives the program a competitive edge.
    • Regulatory compliance: By updating your software, you can make sure that your application complies with all applicable laws. This is particularly crucial in sectors like healthcare and finance, where strict guidelines must be followed.
    • Improved teamwork: Regular software maintenance can encourage improved collaboration between various teams, including users and developers. This can improve communication and problem solving.
    • Reduction in downtime: Updates to software can lessen errors and system malfunctions. This will enhance the company's operations and lessen the possibility of losing clients or sales.
    • Increased scalability: Applications that receive routine maintenance are more adaptable and can accommodate expanding user needs. This is particularly crucial for software with a big user base or a growing business.

    Disadvantages of software maintenance:

    • Cost: Maintaining software requires a significant amount of time and resources, which can be costly.
    • Scheduling failures: Maintenance that interferes with regular software use or business hours can cause availability issues and inconvenience.
    • Complexity: Complex software systems can be difficult to maintain and update, and call for specialized knowledge and abilities.
    • Potential for introducing new errors: It is critical to fully test the program after maintenance, because new errors may be introduced as a result of adding new features or fixing issues.
    • User resistance: User satisfaction and acceptance may be jeopardized if users object to a software update or modification.
    • Compatibility problems: Hardware or software incompatibilities brought on by maintenance may cause integration problems.
    • Absence of documentation: Inadequate or nonexistent documentation can make software maintenance more difficult and time-consuming, which can result in mistakes or delays. If the cost of updating and maintaining the software surpasses the cost of starting from scratch, the technical debt accumulated through maintenance over time becomes apparent.
    • Skills gap: If maintaining a software system calls for specific knowledge or abilities that are lacking, the business may have to outsource or pay more.
    • Inadequate testing: Errors and possible security flaws may result from partial or nonexistent testing following maintenance. Eventually a software system can reach a point where further maintenance or updating is neither affordable nor practical, and a costly and time-consuming system replacement may result.
  • Musa-Okumoto Logarithmic Model

    The failure intensity is

                    λ(τ) = β0β1 / (1 + β1τ)

    and it belongs to the mean value function

                    μ(τ) = β0 ln(1 + β1τ)

    This is the functional form of the Musa-Okumoto logarithmic model.

    Like Musa’s basic execution time model, the “Logarithmic Poisson Execution Time Model” by Musa and Okumoto is based on failure data measured in execution time.

    Assumptions

    1. At time τ = 0 no failures have been observed, i.e., P(M(0) = 0) = 1.
    2. The failure intensity decreases exponentially with the expected number of failures observed, i.e., λ(μ) = β0β1·exp(−μ/β0), where β0β1 is the initial failure intensity and β0⁻¹ is dubbed the failure intensity decay parameter.
    3. The number of failures observed by time τ,M(τ), follows a Poisson Process.

    As the derivation of the Musa-Okumoto logarithmic model by the fault exposure ratio has shown, the exponentially decreasing failure intensity implies that the per-fault hazard rate has the shape of a bathtub curve.
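
    As a rough illustration (not part of the original description), the following Python sketch evaluates the mean value function and failure intensity given above; the values chosen for β0 and β1 are assumed purely for demonstration.

        import math

        def mean_value(tau, beta0, beta1):
            """Expected number of failures observed by execution time tau."""
            return beta0 * math.log(1.0 + beta1 * tau)

        def failure_intensity(tau, beta0, beta1):
            """Failure intensity lambda(tau) = beta0*beta1 / (1 + beta1*tau)."""
            return beta0 * beta1 / (1.0 + beta1 * tau)

        # Assumed parameters: initial failure intensity beta0*beta1 = 10 failures/CPU hr,
        # failure intensity decay parameter 1/beta0 = 0.05 per failure.
        beta0, beta1 = 20.0, 0.5
        for tau in (0, 10, 50, 100):
            print(tau, round(mean_value(tau, beta0, beta1), 2),
                  round(failure_intensity(tau, beta0, beta1), 3))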

  • Goel-Okumoto (GO) Model

    The model developed by Goel and Okumoto in 1979 is based on the following assumptions:

    1. The number of failures experienced by time t follows a Poisson distribution with the mean value function μ(t). This mean value function has the boundary conditions μ(0) = 0 and lim(t→∞) μ(t) = N < ∞.
    2. The number of software failures that occur in (t, t+Δt] with Δt → 0 is proportional to the expected number of undetected errors, N – μ(t). The constant of proportionality is ∅.
    3. For any finite collection of times t1 < t2 < … < tn, the number of failures occurring in each of the disjoint intervals (0, t1), (t1, t2), …, (tn−1, tn) is independent.
    4. Whenever a failure has occurred, the fault that caused it is removed instantaneously and without introducing any new fault into the software.

    Since each fault is perfectly repaired after it has caused a failure, the number of inherent faults in the software at the start of testing is equal to the number of failures that will have appeared after an infinite amount of testing. According to assumption 1, M(∞) follows a Poisson distribution with expected value N. Therefore, N is the expected number of initial software faults, as opposed to the fixed but unknown actual number of initial software faults μ0 in the Jelinski-Moranda model.

    Assumption 2 states that the failure intensity at time t is given by

                    dμ(t)/dt = ∅[N – μ(t)]

    Just like in the Jelinski-Moranda model, the failure intensity is the product of the constant hazard rate of a single fault and the number of expected faults remaining in the software. However, N itself is an expected value.
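
    A minimal Python sketch of the corresponding Goel-Okumoto functions, μ(t) = N(1 − e^(−∅t)) and λ(t) = N∅e^(−∅t), is shown below; the values chosen for N and ∅ are illustrative assumptions, not taken from the text.

        import math

        def go_mean_value(t, N, phi):
            """Expected cumulative number of failures by time t: mu(t) = N*(1 - exp(-phi*t))."""
            return N * (1.0 - math.exp(-phi * t))

        def go_failure_intensity(t, N, phi):
            """Failure intensity lambda(t) = d mu/dt = N*phi*exp(-phi*t)."""
            return N * phi * math.exp(-phi * t)

        # Assumed parameters: N = 150 expected inherent faults, phi = 0.02 per hour.
        N, phi = 150.0, 0.02
        for t in (0, 25, 50, 100):
            print(t, round(go_mean_value(t, N, phi), 1),
                  round(go_failure_intensity(t, N, phi), 3))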

    Musa’s Basic Execution time Model

    Musa’s basic execution time model is based on execution time, i.e., the time used during modeling is the actual CPU execution time of the software being modeled. This model is easy to understand and apply, and its predictive value has generally been found to be good. The model focuses on failure intensity while modeling reliability.

    It assumes that the failure intensity decreases with time, that is, as (execution) time increases, the failure intensity decreases. This assumption is usually true because of what is assumed about the software testing activity during which the data is collected: if a failure is observed during testing, the fault that caused that failure is detected, and the fault is removed.

    Even if a specific fault removal action may be unsuccessful, overall failures lead to a reduction of faults in the software. Consequently, the failure intensity decreases. Most other models make a similar assumption, which is consistent with actual observations.

    In the basic model, it is assumed that each failure causes the same amount of decrement in the failure intensity. That is, the failure intensity decreases at a constant rate with the number of failures. In the more sophisticated Musa’s logarithmic model, the reduction is not assumed to be linear but logarithmic.

    Musa’s basic execution time model, established in 1975, was the first one to explicitly require that the time measurements be in actual CPU time utilized in executing the application under test (called “execution time” t for short).

    Although it was not initially formulated this way, the model can be characterized by three properties:

    • The number of failures that can be experienced in infinite time is finite.
    • The distribution of the number of failures noticed by time t is of Poisson type.
    • The functional form of the failure intensity in terms of time is exponential.

    It shares these characteristics with the Goel-Okumoto model, and the two models are mathematically equivalent. In addition to the use of execution time, a difference lies in the interpretation of the constant per-fault hazard rate ∅. Musa split ∅ into two constant factors, the linear execution frequency f and the so-called fault exposure ratio K:

                    dμ(t)/ dt= f K [N – μ(t )]

    f can be calculated as the average object-instruction execution rate of the computer, r, divided by the number of source code instructions of the application under test, ls, times the average number of object instructions per source code instruction, Qx: f = r / (ls · Qx).

    The fault exposure ratio relates the fault velocity f·[N − μ(t)], the speed with which defective parts of the code would be passed if all the statements were consecutively executed, to the failure intensity experienced. Therefore, it can be interpreted as the average number of failures occurring per fault remaining in the code during one linear execution of the program.
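
    The small Python sketch below evaluates f = r / (ls·Qx) and the resulting failure intensity f·K·[N − μ]; every numeric figure in it is an assumption made purely for illustration.

        def linear_execution_frequency(r, ls, qx):
            """f = r / (ls * Qx): object-instruction rate divided by object-code size."""
            return r / (ls * qx)

        def failure_intensity(f, K, N, mu):
            """d mu/dt = f * K * (N - mu): fault exposure ratio times the fault velocity."""
            return f * K * (N - mu)

        # Assumed figures: 4e8 object instructions/s, 20,000 source lines,
        # 3 object instructions per source line, K = 4.2e-7, N = 120 faults, 30 failures so far.
        f = linear_execution_frequency(r=4e8, ls=20_000, qx=3)
        print(round(f, 1), "linear executions of the program per second")
        print(failure_intensity(f, K=4.2e-7, N=120, mu=30), "failures per second")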

  • Basic Execution Time Model

    This model was established by J.D. Musa in 1979, and it is based on execution time. The basic execution model is the most popular and generally used reliability growth model, mainly because:

    • It is practical, simple, and easy to understand.
    • Its parameters clearly relate to the physical world.
    • It can be used for accurate reliability prediction.

    The basic execution model determines failure behavior initially using execution time. Execution time may later be converted into calendar time.

    The failure behavior is a nonhomogeneous Poisson process, which means the associated probability distribution is a Poisson process whose characteristics vary in time.

    It is equivalent to the M-O logarithmic Poisson execution time model, but with a different mean value function.

    The mean value function, in this case, is based on an exponential distribution.

    Variables involved in the Basic Execution Model:

    Failure intensity (λ): number of failures per time unit.

    Execution time (τ): the time for which the program has been executing.

    Mean failures experienced (μ): mean failures experienced in a time interval.

    In the basic execution model, the mean failures experienced, μ, is expressed in terms of the execution time (τ) as

                    μ(τ) = ν0 [1 − exp(−(λ0/ν0)·τ)]

    Where

    λ0: stands for the initial failure intensity at the start of the execution.

    ν0: stands for the total number of failures occurring over an infinite time period; it corresponds to the expected number of failures to be observed eventually.

    The failure intensity expressed as a function of the execution time is given by

                    λ(τ) = λ0 exp(−(λ0/ν0)·τ)

    It is based on the above formula. The failure intensity λ can also be expressed in terms of μ as:

                    λ(μ) = λ0 [1 − μ/ν0]     (equation 1)

    Where

    λ0: Initial failure intensity.

    ν0: Total number of failures experienced if a program is executed for an infinite time period.

    μ: Average or expected number of failures experienced at a given point in time.

    τ: Execution time.


    For a derivation of this relationship, equation 1 can be written as:

                    dμ(τ)/dτ = λ0 [1 − μ(τ)/ν0]

    The above differential equation can be solved for λ(τ) and results in:

                    λ(τ) = λ0 exp(−(λ0/ν0)·τ)

    The failure intensity as a function of execution time is shown in the figure:

    [Figure: failure intensity λ(τ) decaying exponentially with execution time τ]

    Based on the above expressions, given some failure intensity objective, one can compute the expected number of additional failures ∆μ and the additional execution time ∆τ required to reach that objective:

                    ∆μ = (ν0/λ0)·(λP − λF)

                    ∆τ = (ν0/λ0)·ln(λP/λF)

    Where

    λ0: Initial failure Intensity

    λP: Present failure Intensity

    λF: Failure intensity objective

    ∆μ: Expected number of additional failures to be experienced to reach failure intensity objectives.

    ∆τ: Additional execution time required to reach the failure intensity objective.

    This can be derived in mathematical form: since λ(μ) = λ0(1 − μ/ν0), we have μ = ν0(1 − λ/λ0), and hence ∆μ = (ν0/λ0)(λP − λF); similarly, since λ(τ) = λ0 exp(−(λ0/ν0)τ), we have τ = (ν0/λ0) ln(λ0/λ), and hence ∆τ = (ν0/λ0) ln(λP/λF).

    Example: Assume that a program will experience 200 failures in infinite time. It has now experienced 100. The initial failure intensity was 20 failures/CPU hr. Determine the current failure intensity.

    1. Find the decrement of failure intensity per failure.
    2. Calculate the failures experienced and failure intensity after 20 and 100 CPU hrs. of execution.
    3. Compute the additional failures and additional execution time required to reach the failure intensity objective of 5 failures/CPU hr.

    Use the basic execution time model for the above-mentioned calculations.

    Solution:

    Given: ν0 = 200 failures, μ = 100 failures experienced so far, λ0 = 20 failures/CPU hr.

    (1) Current failure intensity:

                    λ = λ0 (1 − μ/ν0) = 20 (1 − 100/200) = 10 failures/CPU hr

    (2) Decrement of failure intensity per failure:

                    dλ/dμ = −λ0/ν0 = −20/200 = −0.1 per failure

    (3)(a) Failures experienced & failure intensity after 20 CPU hr:

                    μ(20) = ν0 [1 − exp(−(λ0/ν0)·20)] = 200 (1 − e⁻²) ≈ 173 failures
                    λ(20) = λ0 exp(−(λ0/ν0)·20) = 20 e⁻² ≈ 2.7 failures/CPU hr

    (b) Failures experienced & failure intensity after 100 CPU hr:

                    μ(100) = 200 (1 − e⁻¹⁰) ≈ 200 failures
                    λ(100) = 20 e⁻¹⁰ ≈ 0.0009 failures/CPU hr

    (4) Additional failures (∆μ) required to reach the failure intensity objective of 5 failures/CPU hr:

                    ∆μ = (ν0/λ0)(λP − λF) = (200/20)(10 − 5) = 50 failures

    The additional execution time required to reach the failure intensity objective of 5 failures/CPU hr:

                    ∆τ = (ν0/λ0) ln(λP/λF) = (200/20) ln(10/5) ≈ 6.93 CPU hr
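
    The worked example above can be reproduced with a short Python sketch of the basic execution time model formulas; the function and variable names below are informal choices made for this illustration.

        import math

        lam0, nu0 = 20.0, 200.0   # initial failure intensity (failures/CPU hr), total expected failures

        def lam_of_mu(mu):
            """Failure intensity after mu failures have been experienced."""
            return lam0 * (1.0 - mu / nu0)

        def mu_of_tau(tau):
            """Expected failures experienced after tau CPU hours."""
            return nu0 * (1.0 - math.exp(-(lam0 / nu0) * tau))

        def lam_of_tau(tau):
            """Failure intensity after tau CPU hours."""
            return lam0 * math.exp(-(lam0 / nu0) * tau)

        def delta_mu(lam_p, lam_f):
            """Additional failures needed to move from intensity lam_p to objective lam_f."""
            return (nu0 / lam0) * (lam_p - lam_f)

        def delta_tau(lam_p, lam_f):
            """Additional execution time needed to move from intensity lam_p to objective lam_f."""
            return (nu0 / lam0) * math.log(lam_p / lam_f)

        print(lam_of_mu(100))                      # current failure intensity: 10 failures/CPU hr
        print(lam0 / nu0)                          # decrement of intensity per failure: 0.1
        print(mu_of_tau(20), lam_of_tau(20))       # ~172.9 failures, ~2.71 failures/CPU hr
        print(mu_of_tau(100), lam_of_tau(100))     # ~200 failures, ~0.0009 failures/CPU hr
        print(delta_mu(10, 5), delta_tau(10, 5))   # 50 failures, ~6.93 CPU hr
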
  • Jelinski and Moranda Model

    The Jelinski-Moranda (JM) model, which is also a Markov process model, has strongly affected many later models which are in fact modifications of this simple model.

    Characteristics of JM Model

    Following are the characteristics of JM-Model:

    1. It is a Binomial type model
    2. It is one of the earliest and certainly one of the most well-known black-box models.
    3. J-M model always yields an over-optimistic reliability prediction.
    4. The JM model assumes a perfect debugging process, i.e., the detected fault is removed with certainty.
    5. The constant software failure rate of the J-M model at the ith failure interval is given by:

            λ(ti) = ϕ [N-(i-1)],     i=1, 2… N ………equation 1

    Where

    ϕ=a constant of proportionality indicating the failure rate provided by each fault

    N=the initial number of errors in the software

    ti=the time between (i-1)th and (i)th failure.

    The mean value and the failure intensity functions for this model, which belongs to the binomial type, can be obtained by multiplying the inherent number of faults by the cumulative distribution function and the probability density function (pdf), respectively:

                μ(ti) = N(1 − e^(−ϕti))  …equation 2

    And

                λ(ti) = Nϕe^(−ϕti)  …equation 3

    These characteristics, plus other characteristics of the J-M model, are summarized in the following table:

    Measure of Reliability | Formula
    Probability density function | f(ti) = ϕ[N − (i−1)] e^(−ϕ[N−(i−1)]ti)
    Software reliability function | R(ti) = e^(−ϕ[N−(i−1)]ti)
    Failure rate function | λ(ti) = ϕ[N − (i−1)]
    Mean time to failure function | MTTF = 1 / (ϕ[N − (i−1)])
    Mean value function | μ(ti) = N(1 − e^(−ϕti))
    Failure intensity function | λ(ti) = Nϕe^(−ϕti)
    Median | m = ln 2 / (ϕ[N − (i−1)])
    Cumulative distribution function | F(ti) = 1 − e^(−ϕ[N−(i−1)]ti)
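
    A minimal Python sketch of the closed-form J-M expressions in the table is given below; the values chosen for N and ϕ are assumed purely for illustration.

        import math

        def jm_failure_rate(i, N, phi):
            """Constant failure rate during the i-th failure interval: phi * (N - (i-1))."""
            return phi * (N - (i - 1))

        def jm_reliability(t, i, N, phi):
            """Reliability over the i-th interval: R(t) = exp(-phi * (N - (i-1)) * t)."""
            return math.exp(-jm_failure_rate(i, N, phi) * t)

        def jm_mttf(i, N, phi):
            """Mean time to the i-th failure: 1 / (phi * (N - (i-1)))."""
            return 1.0 / jm_failure_rate(i, N, phi)

        # Assumed parameters: N = 100 initial faults, phi = 0.003 per fault per hour.
        N, phi = 100, 0.003
        for i in (1, 10, 50, 90):
            print(i, round(jm_failure_rate(i, N, phi), 4), round(jm_mttf(i, N, phi), 1),
                  round(jm_reliability(5.0, i, N, phi), 3))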

    Assumptions

    The assumptions made in the J-M model include the following:

    1. The number of initial software errors is unknown but fixed and constant.
    2. Each error in the software is independent and equally likely to cause a failure during a test.
    3. Time intervals between occurrences of failures are independent, exponentially distributed random variables.
    4. The software failure rate remains constant over the intervals between fault occurrences.
    5. The failure rate is proportional to the number of faults that remain in the software.
    6. A detected error is removed immediately, and no new errors are introduced during the removal of the detected defect.
    7. Whenever a failure occurs, the corresponding fault is removed with certainty.

    Variations in JM Model

    The JM model was the first prominent software reliability model. Several researchers showed interest in it and modified it, using different parameters such as failure rate, perfect debugging, imperfect debugging, number of failures, etc. We will now discuss the different existing variations of this model.


    1. Lipow Modified Version of Jelinski-Moranda Geometric Model

    It allows multiple bugs to be removed in a time interval. The program failure rate becomes

                λ(ti) = D·K^n(i−1)

    Where n(i−1) is the cumulative number of errors found up to the (i−1)st time interval.

    2. Sukert Modified Schick-Wolverton Model

    Sukert modified the S-W model to allow more than one failure at each time interval. The program failure rate becomes

                λ(ti) = ϕ[N − n(i−1)] ti

    Where n(i−1) is the cumulative number of failures at the (i−1)th failure interval.

    3. Schick Wolverton Model

    The Schick and Wolverton (S-W) model is similar to the J-M model, except that it further assumes that the failure rate at the ith time interval increases with the time elapsed since the last debugging.

    Assumptions

    • Errors occur by accident.
    • The bug detection rate in the defined time intervals is constant.
    • Errors are independent of each other.
    • No new bugs are developed.
    • Bugs are corrected after they have been detected.

    In this model, the program failure rate function is:

                λ (ti)= ϕ[N-(i-1)] ti

    Where ϕ is a proportional constant, N is the initial number of bugs in the program, and ti is the test time since the (i-1)st failure.

    4. GO-Imperfect Debugging Model

    Goel and Okumoto extended the J-M model by assuming that an error is removed with probability p whenever a failure occurs. The program failure rate and reliability function at the ith failure interval are

                λ(ti) = ϕ[N − p(i−1)]
                R(ti) = e^(−ϕ[N − p(i−1)] ti)

    5. Jelinski-Moranda Geometric Model

    This model assumes that the program failure rate function is initially a constant D and decreases geometrically at failure times. The program failure rate and the reliability function of the time between failures at the ith failure interval are

                λ(ti) = D·K^(i−1)
                R(ti) = e^(−D·K^(i−1)·ti)

    Where K is the parameter of the geometric function, 0 < K < 1.

    6. Littlewood-Verrall Bayesian Model

    This model assumes that the times between failures are independent exponential random variables with parameters λi, i = 1, 2, …, n, each of which itself has a prior gamma distribution with parameters Ψ(i) and α, reflecting programmer quality and task difficulty.

    [Equation: prior gamma distribution of the failure rate parameter]

    Where B represents the fault reduction factor

    7. Shanthikumar General Markov Model

    This model assumes that the failure intensity function, after n failures have been removed, is as given below:

                λ SG(n, t) = Ψ(t) (N0-n)

    Where Ψ(t) is a time-dependent proportionality function.

    8. An Error Detection Model for Application during Software Development

    The primary feature of this new model is that the variable (growing) size of a developing program is accommodated so that the quality of a program can be predicted by analyzing a basic segment.

    Assumptions

    This model has the following assumptions along with the JM model assumptions:

    1. Any tested initial portion of the program is representative of the entire program with respect to the number and nature of its incipient errors.
    2. The detectability of an error is unaffected by the “dilution” incurred when the initially tested portion is augmented with new code.
    3. The number of lines of code which exists at any time is known.
    4. The growth function and the bug detection process are independent.

    9. The Langberg Singpurwalla Model

    This model shows how several models used to define the reliability of computer software can be comprehensively viewed by adopting a Bayesian point of view.

    This model provides a different motivation for a commonly used model using notions from shock models.

    10. Jewell Bayesian Software Reliability Model

    Jewell extended a result by Langberg and Singpurwalla (1985) and produced an extension of the Jelinski-Moranda model.

    Assumptions

    1. The testing protocol is allowed to run for a fixed length of time, possibly, but not necessarily, coinciding with a failure epoch.
    2. The distribution of the unknown number of faults is generalized from the one-parameter Poisson distribution by assuming that the parameter is itself a random quantity with a Beta prior distribution.
    3. Although the estimation of the posterior distributions of the parameters leads to complex expressions, the calculation of the predictive distribution for undetected bugs is straightforward.
    4. Although it is now known that the MLEs for reliability growth can be volatile, if a point estimator is needed, the predictive mode is easily calculated without obtaining the full distribution first.

    11. Quantum Modification to the JM Model

    This model replaces the JM model assumption that each error makes the same contribution to the unreliability of the software with the new assumption that different types of errors may have different effects on the failure rate of the software.

    Failure Rate:

                λ(ti) = Ψ [Q − Σ(j=1..i−1) wj]

    Where

    Q = initial number of failure quantum units inherent in a software

    Ψ = the failure rate corresponding to a single failure quantum unit

    wj = the number of failure-quantum units of the jth fault, i.e., the size of the jth failure quantum

    12. Optimal Software Released Based on Markovian Software Reliability Model

    In this model, the software fault detection process is described by a Markovian birth process with absorption. This paper amended the optimal software release policies by taking into account the waste of software testing time.

    13. A Modification to the Jelinski-Moranda Software Reliability Growth Model Based on Cloud Model Theory

    A new unknown parameter θ is included in the estimation of the JM model parameters such that θ ∈ [θL, θU]. The confidence level is the probability value (1 − α) associated with a confidence interval. In general, if the confidence interval for a software reliability index θ is obtained, we can estimate the mathematical characteristics of the virtual cloud C(Ex, En, He), which can be converted into a qualitative evaluation of the system by an X-condition cloud generator.

    14. Modified JM Model with imperfect Debugging Phenomenon

    The modified JM model extends the J-M model by relaxing the assumption of a perfect debugging process and allowing two types of imperfect removal:

    1. The fault is not removed successfully, while no new faults are introduced.
    2. The fault is not removed successfully, while new faults are introduced due to incorrect diagnoses.

    Assumptions

    The assumptions made in the Modified J-M model contain the following:

    • The number of initial software errors is unknown but fixed and constant.
    • Each error in the software is independent and equally likely to cause a failure during a test.
    • Time intervals between occurrences of failure are independent, exponentially distributed random variables.
    • The software failure rate remains fixed over the intervals between fault occurrences.
    • The failure rate is proportional to the number of errors that remain in the software.
    • Whenever a failure occurs, the detected error is removed with probability p, the detected fault is not entirely removed with probability q, and a new fault is generated with probability r. It is evident that p + q + r = 1 and q ≥ r.

    List of various characteristics underlying the Modified JM Model with imperfect Debugging Phenomenon

    Measure of reliability | Formula
    Software failure rate | λ(ti) = ϕ[N − (i−1)(p−r)]
    Failure density function | f(ti) = ϕ[N − (i−1)(p−r)] exp(−ϕ[N − (i−1)(p−r)] ti)
    Distribution function | Fi(ti) = 1 − exp(−ϕ[N − (i−1)(p−r)] ti)
    Reliability function at the ith failure interval | R(ti) = 1 − Fi(ti) = exp(−ϕ[N − (i−1)(p−r)] ti)
    Mean time to failure function | 1 / (ϕ[N − (i−1)(p−r)])
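
    A minimal Python sketch of the modified J-M failure rate and reliability under imperfect debugging follows; the parameter values (N, ϕ, p, r) are assumptions made for illustration only.

        import math

        def mod_jm_failure_rate(i, N, phi, p, r):
            """Failure rate at the i-th interval when each detected fault is removed with
            probability p and a new fault is introduced with probability r (p + q + r = 1)."""
            return phi * (N - (i - 1) * (p - r))

        def mod_jm_reliability(t, i, N, phi, p, r):
            """R(t) = exp(-phi * [N - (i-1)(p - r)] * t) over the i-th failure interval."""
            return math.exp(-mod_jm_failure_rate(i, N, phi, p, r) * t)

        # Assumed parameters: 100 initial faults, phi = 0.003/hr,
        # perfect removal 90% of the time, 4% of fixes introduce a new fault.
        N, phi, p, r = 100, 0.003, 0.90, 0.04
        for i in (1, 20, 60):
            lam = mod_jm_failure_rate(i, N, phi, p, r)
            print(i, round(lam, 4), round(mod_jm_reliability(10.0, i, N, phi, p, r), 3))
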
  • Software Reliability Models

    A software reliability model specifies the form of a random process that describes the behavior of software failures with respect to time.

    Software reliability models have appeared as people try to understand the features of how and why software fails, and attempt to quantify software reliability.

    Over 200 models have been established since the early 1970s, but how to quantify software reliability remains mostly unsolved.

    There is no individual model that can be used in all situations. No model is complete or even representative.

    Most software reliability models contain the following parts:

    • Assumptions
    • Factors
    • A mathematical function that relates reliability to the factors; the function is usually higher-order exponential or logarithmic.

    Software Reliability Modeling Techniques

    [Figure: software reliability modeling techniques, divided into prediction modeling and estimation modeling]

    Both kinds of modeling methods are based on observing and accumulating failure data and analyzing with statistical inference.

    Differentiate between software reliability prediction models and software reliability estimation models

    Basis | Prediction Models | Estimation Models
    Data reference | Uses historical data | Uses data from the current software development effort
    When used in the development cycle | Usually made before the development or test phases; can be used as early as the concept phase | Usually made later in the life cycle (after some data have been collected); not typically used in the concept or development phases
    Time frame | Predicts reliability at some future time | Estimates reliability at either the present time or some future time

    Reliability Models

    A reliability growth model is a mathematical model of software reliability which predicts how software reliability should improve over time as errors are discovered and repaired. These models help the manager decide how much effort should be devoted to testing. The objective of the project manager is to test and debug the system until the required level of reliability is reached.

    The software reliability models discussed here include the Jelinski-Moranda model, the Goel-Okumoto model, Musa’s basic execution time model, and the Musa-Okumoto logarithmic model.
  • Software Fault Tolerance

    Software fault tolerance is the ability of software to detect and recover from a fault that is happening, or has already happened, in either the software or the hardware of the system in which the software is running, so that it can continue to provide service in accordance with the specification.

    Software fault tolerance is a necessary component to construct the next generation of highly available and reliable computing systems from embedded systems to data warehouse systems.

    To adequately understand software fault tolerance it is important to understand the nature of the problem that software fault tolerance is supposed to solve.

    Software faults are all design faults. Software manufacturing, i.e., the reproduction of software, is considered to be perfect. Having design faults as the sole source of the problem makes software very different from almost any other system in which fault tolerance is the desired property.

    Software Fault Tolerance Techniques


    1. Recovery Block

    The recovery block method is a simple technique developed by Randell. The recovery block operates with an adjudicator, which confirms the results of various implementations of the same algorithm. In a system with recovery blocks, the system view is broken down into fault-recoverable blocks.

    The entire system is constructed of these fault-tolerant blocks. Each block contains at least a primary, secondary, and exceptional case code along with an adjudicator. The adjudicator is the component, which determines the correctness of the various blocks to try.

    The adjudicator should be kept somewhat simple in order to maintain execution speed and aid correctness. Upon first entering a unit, the adjudicator executes the primary alternate. (There may be N alternates in a unit which the adjudicator may try.) If the adjudicator determines that the primary alternate failed, it rolls back the state of the system and tries the secondary alternate.

    If the adjudicator does not accept the results of any of the alternates, it then invokes the exception handler, which then indicates the fact that the software could not perform the requested operation.
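
    The following Python sketch is a toy illustration of the recovery block structure described above (recovery point, alternates, adjudicator, exception handler). The function and variable names are invented for this example, and the rollback is simulated by copying a state dictionary.

        import copy

        def recovery_block(state, alternates, acceptance_test):
            """Try each alternate in turn; roll the state back after a rejected or failing attempt."""
            for alternate in alternates:
                checkpoint = copy.deepcopy(state)      # establish a recovery point
                try:
                    result = alternate(state)
                    if acceptance_test(result):        # adjudicator accepts the result
                        return result
                except Exception:
                    pass                               # treat a crash like a rejected result
                state.clear()
                state.update(checkpoint)               # roll back to the recovery point
            raise RuntimeError("exception handler: no alternate produced an acceptable result")

        # Toy usage: the primary alternate is faulty (negative root), the secondary succeeds.
        primary   = lambda s: -abs(s["x"]) ** 0.5
        secondary = lambda s: abs(s["x"]) ** 0.5
        print(recovery_block({"x": 9.0}, [primary, secondary], acceptance_test=lambda r: r >= 0))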

    The recovery block technique increases the pressure on the specification to be precise enough that multiple alternatives which are functionally the same can be created. This problem is further discussed in the context of the N-version software method.

    2. N-Version Software

    The N-version software method attempts to parallel the traditional hardware fault tolerance concept of N-way redundant hardware. In an N-version software system, each module is implemented in up to N different versions. Each variant accomplishes the same function, but hopefully in a different way. Each version then submits its answer to a voter or decider, which determines the correct answer and returns it as the result of the module.

    This system can hopefully overcome the design faults present in most software by relying upon the design diversity concept. An essential distinction in N-version software is the fact that the system could include multiple types of hardware using numerous versions of the software.

    N-version software can successfully tolerate faults only if the required design diversity is achieved. The dependence on appropriate specifications in N-version software (and recovery blocks) cannot be stressed enough.
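
    The sketch below illustrates the N-version idea with a simple majority-vote decider in Python; the three “versions” are trivial stand-ins invented for this example.

        from collections import Counter

        def n_version_vote(inputs, versions, quantize=lambda r: round(r, 6)):
            """Run each independently developed version and return the majority answer."""
            answers = [quantize(v(*inputs)) for v in versions]     # run the N variants
            answer, votes = Counter(answers).most_common(1)[0]
            if votes <= len(versions) // 2:
                raise RuntimeError("decider: no majority agreement among versions")
            return answer

        # Toy usage: three variants of the same computation, one of them faulty.
        v1 = lambda a, b: a + b
        v2 = lambda a, b: b + a
        v3 = lambda a, b: a + b + 1        # seeded design fault
        print(n_version_vote((2, 3), [v1, v2, v3]))   # -> 5, the majority result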

    3. N-Version Software and Recovery Blocks

    The differences between the recovery block technique and the N-version technique are not numerous, but they are essential. In traditional recovery blocks, each alternative is executed serially until an acceptable solution is found, as determined by the adjudicator. The recovery block method has been extended to include concurrent execution of the various alternatives.

    The N-version technique has always been designed to be implemented using N-way hardware running concurrently. In a serial retry system, the time cost of trying multiple alternatives may be too high, especially for a real-time system. Conversely, concurrent systems require the expense of N-way hardware and a communications network to connect them.

    The recovery block technique requires that a specific adjudicator be built for each module; in the N-version method, a single decider may be used. The recovery block technique, assuming that the programmer can create a sufficiently simple adjudicator, will create a system which is difficult to drive into an incorrect state.


  • Reliability Metrics

    Reliability metrics are used to quantitatively express the reliability of the software product. The choice of which metric to use depends upon the type of system to which it applies and the requirements of the application domain.

    Some reliability metrics which can be used to quantify the reliability of the software product are as follows:


    1. Mean Time to Failure (MTTF)

    MTTF is defined as the time interval between two successive failures. An MTTF of 200 means that one failure can be expected every 200 time units. The time units are entirely system-dependent, and MTTF can even be stated in terms of the number of transactions. MTTF is suitable for systems with long transactions.

    For example, it is suitable for computer-aided design systems, where a designer will work on a design for several hours, as well as for word-processor systems.

    To measure MTTF, we can record the failure data for n failures. Let the failures occur at the time instants t1, t2, …, tn.

    MTTF can be calculated as

                MTTF = Σ(i=1..n−1) (t(i+1) − ti) / (n − 1)

    2. Mean Time to Repair (MTTR)

    Once failure occurs, some-time is required to fix the error. MTTR measures the average time it takes to track the errors causing the failure and to fix them.

    3. Mean Time Between Failures (MTBF)

    We can merge MTTF & MTTR metrics to get the MTBF metric.

                      MTBF = MTTF + MTTR

    Thus, an MTBF of 300 denotes that once a failure occurs, the next failure is expected to appear only after 300 hours. In this case, the time measurements are real time, not execution time as in MTTF.

    4. Rate of occurrence of failure (ROCOF)

    ROCOF is the number of failures occurring per unit time interval, i.e., the number of unexpected events over a specific period of operation. It is the frequency with which unexpected behavior is likely to occur. A ROCOF of 0.02 means that two failures are likely to occur in every 100 operational time units. It is also called the failure intensity metric.

    5. Probability of Failure on Demand (POFOD)

    POFOD is defined as the probability that the system will fail when a service is requested: the proportion of service requests (system inputs) that result in a system failure.

    A POFOD of 0.1 means that one out of ten service requests may fail. POFOD is an essential measure for safety-critical systems. It is particularly relevant for protection systems, where services are demanded only occasionally.

    6. Availability (AVAIL)

    Availability is the probability that the system is available for use at a given time. It takes into account the repair time and the restart time of the system. An availability of 0.995 means that in every 1000 time units, the system is likely to be available for 995 of them. It is the percentage of time that a system is available for use, taking into account planned and unplanned downtime. If a system is down an average of four hours out of 100 hours of operation, its AVAIL is 96%.
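
    As an informal illustration of these metrics, the Python sketch below derives MTTF, MTTR, MTBF, and availability from a hypothetical failure log; the log entries and field layout are made-up assumptions, not data from the text.

        # Assumed log: (failure_time, repair_duration) pairs, both in hours.
        failure_log = [(120.0, 2.0), (310.0, 1.5), (505.0, 3.0), (720.0, 1.0)]

        failure_times = [t for t, _ in failure_log]
        repair_times  = [d for _, d in failure_log]

        n = len(failure_times)
        mttf = sum(failure_times[i + 1] - failure_times[i] for i in range(n - 1)) / (n - 1)
        mttr = sum(repair_times) / n
        mtbf = mttf + mttr
        avail = mttf / (mttf + mttr)          # fraction of time the system is available for use

        print(f"MTTF = {mttf:.1f} hr, MTTR = {mttr:.2f} hr, MTBF = {mtbf:.1f} hr, AVAIL = {avail:.3f}")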

    Software Metrics for Reliability

    These metrics are used to improve the reliability of the system by identifying areas of the requirements that need improvement.

    Different Types of Software Metrics are:-


    Requirements Reliability Metrics

    Requirements denote what features the software must include. They specify the functionality that must be contained in the software. The requirements must be written such that there is no misunderstanding between the developer and the client. The requirements must follow a valid structure to avoid the loss of valuable information.

    The requirements should be thorough and detailed so that the design stage is straightforward. The requirements should not contain inadequate information. Requirements reliability metrics evaluate the above-said quality factors of the requirements document.

    Design and Code Reliability Metrics

    The quality attributes evaluated in design and coding are complexity, size, and modularity. Complex modules are tough to understand, and there is a high probability of bugs occurring in them. Reliability will decrease if modules have a combination of high complexity and large size, or high complexity and small size. These metrics can also be applied to object-oriented code, but additional metrics are required there to evaluate quality.

    Testing Reliability Metrics

    These metrics use two methods to calculate reliability.

    First, they ensure that the system performs the tasks that are specified in the requirements. Because of this, bugs due to a lack of functionality are reduced.

    The second method is evaluating the code by finding the bugs and fixing them. To ensure that the system includes the specified functionality, test plans are written that include multiple test cases. Each test case is based on one system state and tests some tasks that are based on an associated set of requirements. The goal of an effective verification program is to ensure that every element is tested, the implication being that, if the system passes the test, the requirement's functionality is contained in the delivered system.

  • Software Reliability Measurement Techniques

    Reliability metrics are used to quantitatively express the reliability of the software product. The choice of which parameter to use depends upon the type of system to which it applies and the requirements of the application domain.

    Measuring software reliability is a difficult problem because we do not have a good understanding of the nature of software. It is difficult to find a suitable way to measure software reliability, and most of the aspects connected to software reliability cannot be measured directly. Even software estimates have no uniform definition. If we cannot measure reliability directly, we can measure something that reflects the features related to reliability.

    The current methods of software reliability measurement can be divided into four categories:


    1. Product Metrics

    Product metrics are those taken from the artifacts that are produced, i.e., requirement specification documents, system design documents, code, etc. These metrics help assess whether the product is good enough through reports on attributes like usability, reliability, maintainability, and portability. Measurements are also taken from the actual body of the source code.

    1. Software size is thought to be reflective of complexity, development effort, and reliability. Lines of Code (LOC), or LOC in thousands (KLOC), is an intuitive initial approach to measuring software size. The basis of LOC is that program length can be used as a predictor of program characteristics such as effort and ease of maintenance.
    2. The function point metric is a technique to measure the functionality of proposed software development based on counts of inputs, outputs, master files, inquiries, and interfaces. It is a measure of the functional complexity of the program and is independent of the programming language.
    3. Test coverage metrics estimate fault content and reliability by performing tests on software products, assuming that software reliability is a function of the portion of software that has been successfully verified or tested.
    4. Complexity is directly linked to software reliability, so representing complexity is essential. Complexity-oriented metrics are a way of determining the complexity of a program’s control structure by simplifying the code into a graphical representation. The representative metric is McCabe’s Complexity Metric.
    5. Quality metrics measure the quality at various steps of software product development. A vital quality metric is Defect Removal Efficiency (DRE). DRE provides a measure of quality because of the different quality assurance and control activities applied throughout the development process (a small computation sketch follows this list).
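
    As noted in item 5, the following sketch computes DRE = E / (E + D); the defect counts used are illustrative assumptions.

        def defect_removal_efficiency(e, d):
            """DRE = E / (E + D), where E = errors found before delivery and
            D = defects found by users after delivery (both counts are illustrative)."""
            return e / (e + d)

        # Example: 180 errors removed during development, 20 defects reported after release.
        print(f"DRE = {defect_removal_efficiency(180, 20):.2f}")   # -> 0.90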

    2. Project Management Metrics

    Project metrics describe the project's characteristics and execution. If the project is properly managed, this helps us achieve better products. A relationship exists between the development process and the ability to complete projects on time and within the desired quality objectives. Costs increase when developers use inadequate processes. Higher reliability can be achieved by using better development, risk management, and configuration management processes.

    These metrics are:

    • Number of software developers
    • Staffing pattern over the life-cycle of the software
    • Cost and schedule
    • Productivity

    3. Process Metrics

    Process metrics quantify useful attributes of the software development process and its environment. They tell whether the process is functioning optimally, as they report on characteristics like cycle time and rework time. The goal of process metrics is to do the right job the first time through the process. The quality of the product is a direct function of the process, so process metrics can be used to estimate, monitor, and improve the reliability and quality of the software. Process metrics describe the effectiveness and quality of the processes that produce the software product.

    Examples are:

    • The effort required in the process
    • Time to produce the product
    • Effectiveness of defect removal during development
    • Number of defects found during testing
    • Maturity of the process

    4. Fault and Failure Metrics

    A fault is a defect in a program which appears when the programmer makes an error, and which causes a failure when executed under particular conditions. These metrics are used to determine the failure-free execution of the software.

    To achieve this objective, the number of faults found during testing and the failures or other problems reported by users after delivery are collected, summarized, and analyzed. Failure metrics are based upon customer information regarding failures found after release of the software. The failure data collected is then used to calculate failure density, Mean Time Between Failures (MTBF), or other parameters to measure or predict software reliability.

  • Software Failure Mechanisms

    The software failure can be classified as:

    Transient failure: These failures only occur with specific inputs.

    Permanent failure: This failure appears on all inputs.

    Recoverable failure: System can recover without operator help.

    Unrecoverable failure: System can recover with operator help only.

    Non-corruption failure: Failure does not corrupt system state or data.

    Corrupting failure: It damages the system state or data.

    Software failures may be due to bugs, ambiguities, oversights or misinterpretation of the specification that the software is supposed to satisfy, carelessness or incompetence in writing code, inadequate testing, incorrect or unexpected usage of the software or other unforeseen problems.

    Hardware vs. Software Reliability

    Hardware Reliability | Software Reliability
    Hardware faults are mostly physical faults. | Software faults are design faults, which are tough to visualize, classify, detect, and correct.
    Hardware components generally fail due to wear and tear. | Software components fail due to bugs.
    In hardware, design faults may also exist, but physical faults generally dominate. | In software, we can hardly find a strict counterpart for "manufacturing" in the sense of the hardware manufacturing process, unless the simple action of uploading software modules into place counts. Therefore, the quality of the software will not change once it is uploaded into storage and starts running.
    Hardware exhibits the failure features shown in the bathtub curve, where periods A, B, and C stand for the burn-in phase, useful life phase, and end-of-life phase, respectively. [Figure: hardware failure rate bathtub curve] | Software reliability does not show the same features. A possible curve, obtained if we project software reliability onto the same axes, is shown in the corresponding figure. [Figure: software failure rate over the test/debug, useful-life, and obsolescence phases]

    There are two significant differences between the hardware and software curves:

    One difference is that in the last stage, the software does not have an increasing failure rate as hardware does. In this phase, the software is approaching obsolescence; there are no motivations for any upgrades or changes to the software. Therefore, the failure rate will not change.

    The second difference is that in the useful-life phase, the software will experience a sharp increase in failure rate each time an upgrade is made. The failure rate then levels off gradually, partly because the defects introduced by the upgrade are found and fixed.

    The upgrades in the above figure signify feature upgrades, not upgrades for reliability. For feature upgrades, the complexity of the software is likely to increase, since the functionality of the software is enhanced. Even error fixes may cause more software failures if the bug fix introduces other defects into the software. For reliability upgrades, it is possible to see a drop in the software failure rate if the objective of the upgrade is to enhance software reliability, for example through a redesign or reimplementation of some modules using better engineering approaches, such as the clean-room method.

    A partial list of the distinct features of software compared to hardware is listed below:


    Failure cause: Software defects are primarily design defects.

    Wear-out: Software does not have an energy-related wear-out phase. Bugs can arise without warning.

    Repairable system: Periodic restarts can help fix software problems.

    Time dependency and life cycle: Software reliability is not a function of operational time.

    Environmental factors: These do not affect software reliability, except insofar as they may affect program inputs.

    Reliability prediction: Software reliability cannot be predicted from any physical basis since it depends entirely on human factors in design.

    Redundancy: It cannot improve Software reliability if identical software elements are used.

    Interfaces: Software interfaces are purely conceptual rather than physical.

    Failure rate motivators: The failure rate is generally not predictable from analyses of separate statements.

    Built with standard components: Well-understood and extensively tested standard components help improve maintainability and reliability. But in the software industry, we have not observed this trend. Code reuse has been around for some time, but only to a limited extent. There are no standard components for software, except for some standardized logic structures.