Author: saqibkhan

  • Project Scheduling

    Project-task scheduling is a significant project planning activity. It involves deciding which tasks will be taken up and when. To schedule the project work, a software project manager needs to do the following:

    1. Identify all the tasks required to complete the project.
    2. Break down large tasks into small activities.
    3. Determine the dependencies among the various activities.
    4. Establish the most likely estimate of the time required to complete each activity.
    5. Allocate resources to activities.
    6. Plan the start and end dates for the various activities.
    7. Determine the critical path. The critical path is the chain of activities that determines the duration of the project.

    The first step in scheduling a software project is identifying all the tasks required to complete it. A good grasp of the intricacies of the project and of the development process helps the manager identify the project's critical tasks effectively. Next, the large tasks are broken down into a logical set of small activities that can be assigned to individual engineers. The work breakdown structure formalism helps the manager break the tasks down systematically. After the project manager has constructed the work breakdown structure, he has to find the dependencies among the activities. Dependencies among activities determine the order in which they must be carried out: if an activity A requires the results of another activity B, then activity A must be scheduled after activity B. In general, the task dependencies define a partial ordering among tasks, i.e., each task may precede a subset of other tasks, but some tasks might have no precedence ordering defined between them (these are called concurrent tasks). The dependencies among the activities are represented in the form of an activity network.
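    The activity-network idea above can be sketched in code. The following is a minimal illustration (the activities, durations, and prerequisites are hypothetical, not from the text): the project duration equals the length of the critical path, computed here as the largest earliest-finish time over all activities.

```python
# Hypothetical activity network: durations in days, with prerequisite lists.
durations = {"A": 3, "B": 2, "C": 4, "D": 1}
prereqs = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

memo = {}

def earliest_finish(act):
    """Earliest finish = duration + latest earliest-finish among prerequisites."""
    if act not in memo:
        start = max((earliest_finish(p) for p in prereqs[act]), default=0)
        memo[act] = start + durations[act]
    return memo[act]

# The critical path A -> C -> D determines the project duration here.
project_duration = max(earliest_finish(a) for a in durations)
print(project_duration)  # 3 + 4 + 1 = 8
```

Activities B and C have no ordering between them, so they are concurrent, exactly as the partial-ordering discussion above describes.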

    Once the activity network representation has been worked out, resources are allocated to each activity. Resource allocation is usually done using a Gantt chart. After resource allocation is complete, a PERT chart representation is developed. The PERT chart representation is useful for program monitoring and control. For task scheduling, the project plan needs to decompose the project tasks into a set of activities, and the time frame in which each activity is to be performed must be determined. The end of each activity is called a milestone. The project manager tracks the progress of the project by auditing the timely completion of these milestones. If he observes that milestones start getting delayed, he has to re-plan the remaining activities carefully so that the overall deadline can still be met.

  • Risk Management Activities

    Risk management consists of three main activities, as shown in fig:

    Risk Management Activities

    Risk Assessment

    The objective of risk assessment is to rank the risks in terms of their loss-causing potential. For risk assessment, each risk should first be rated on two attributes:

    • The likelihood of the risk coming true (denoted as r).
    • The consequence of the problems associated with that risk (denoted as s).

    Based on these two attributes, the priority of each risk can be computed:

                        p = r * s

    Where p is the priority with which the risk must be handled, r is the probability of the risk becoming true, and s is the severity of the loss caused if the risk does become true. Once all identified risks are prioritized, the most likely and most damaging risks can be handled first, and more comprehensive risk abatement procedures can be designed for them.
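    The p = r * s prioritization can be sketched directly. The risk names, probabilities, and severity figures below are hypothetical, used only to show the ranking step:

```python
# Hypothetical risks: (description, probability r in 0..1, severity s in $k loss).
risks = [
    ("key engineer leaves",    0.3, 50),
    ("requirements change",    0.6, 20),
    ("server hardware delay",  0.1, 80),
]

# Priority p = r * s, as in the formula above; handle the highest p first.
prioritized = sorted(risks, key=lambda risk: risk[1] * risk[2], reverse=True)
for name, r, s in prioritized:
    print(f"{name}: p = {r * s:.1f}")
```

Note how the ranking differs from sorting by severity alone: the highest-severity risk (the hardware delay) ends up last because it is also the least likely.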

    1. Risk Identification: The project manager needs to anticipate the risks in the project as early as possible so that their impact can be reduced through effective risk management planning.

    A project can be affected by a large variety of risks. To identify the significant risks that might affect a project, it is necessary to categorize risks into different classes.

    There are different types of risks which can affect a software project:

    1. Technology risks: Risks that arise from the software or hardware technologies used to develop the system.
    2. People risks: Risks that are associated with the people in the development team.
    3. Organizational risks: Risks that arise from the organizational environment in which the software is being developed.
    4. Tools risks: Risks that arise from the software tools and other support software used to build the system.
    5. Requirement risks: Risks that arise from changes to the customer requirements and from the process of managing requirements changes.
    6. Estimation risks: Risks that arise from the management estimates of the resources required to build the system.

    2. Risk Analysis: During the risk analysis process, you have to consider each identified risk and form a judgment about its probability and seriousness.

    There is no simple way to do this. You have to rely on your judgment and experience of previous projects and the problems that arose in them.

    It is not possible to make exact numerical estimates of the probability and seriousness of each risk. Instead, you should assign each risk to one of several bands:

    1. The probability of the risk might be assessed as very low (0-10%), low (10-25%), moderate (25-50%), high (50-75%), or very high (above 75%).
    2. The effect of the risk might be assessed as catastrophic (threatens the survival of the project), serious (would cause significant delays), tolerable (delays are within the allowed contingency), or insignificant.
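    The banding above can be sketched as a small helper function. The thresholds are exactly those listed in point 1; the function itself is only an illustration:

```python
def probability_band(p):
    """Map a probability estimate (0..1) to the bands used above."""
    if p <= 0.10:
        return "very low"
    if p <= 0.25:
        return "low"
    if p <= 0.50:
        return "moderate"
    if p <= 0.75:
        return "high"
    return "very high"

print(probability_band(0.30))  # moderate
```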

    Risk Control

    Risk control is the process of managing risks to achieve the desired outcomes. Once the identified risks of a project have been prioritized, plans must be made to contain the most harmful and most likely risks. Different risks require different containment procedures; in fact, most risks require considerable ingenuity on the part of the project manager.

    There are three main methods to plan for risk management:

    1. Avoid the risk: This may take several forms, such as discussing with the client to change the requirements and reduce the scope of the work, or giving incentives to engineers to reduce the risk of staff turnover.
    2. Transfer the risk: This involves getting the risky component developed by a third party, buying insurance cover, etc.
    3. Risk reduction: This involves planning ways to contain the loss due to the risk. For instance, if there is a risk that some key personnel might leave, new recruitment can be planned.

    Risk Leverage: To choose between the various methods of handling a risk, the project manager must weigh the cost of controlling the risk against the corresponding reduction in risk. For this, the risk leverage of the various risks can be computed.

    Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.

    Risk leverage = (risk exposure before reduction – risk exposure after reduction) / (cost of reduction)
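    The leverage formula translates directly into code. The numbers below are hypothetical; they simply show that a reduction measure paying back five units of exposure per unit of cost has leverage 5:

```python
def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    """Risk leverage = reduction in risk exposure per unit cost of the reduction."""
    return (exposure_before - exposure_after) / cost_of_reduction

# Hypothetical: exposure drops from 100 to 20 at a reduction cost of 16.
print(risk_leverage(100, 20, 16))  # 5.0
```

A leverage greater than 1 means the reduction buys more exposure relief than it costs, which is the usual criterion for choosing among handling methods.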

    1. Risk planning: The risk planning process considers each of the key risks that have been identified and develops ways to manage them.

    For each risk, you have to think of the actions you might take to minimize the disruption to the project if the problem identified in the risk occurs.

    You should also think about the information you might need to collect while monitoring the project so that problems can be anticipated.

    Again, there is no easy process to follow for contingency planning. It relies on the judgment and experience of the project manager.

  • What is Risk Management? – Software Engineering 

    Risk management in software engineering is defined as the process of identifying, analysing, ranking, and treating risks that may threaten the success of a software engineering project. It is a set of actions and strategies taken to reduce the likelihood of risks and their impacts, thereby supporting the achievement of the project's goals and objectives. The main aim of risk management is to contain risk and improve the quality of decision-making by treating threats as strategic concerns before they become critical problems.

    Importance of Risk Management

    • Project Success: Risk management helps recognise risks and deal with them before they become serious, which may help keep projects on schedule, on budget, and to the desired standard.
    • Resource Optimization: Managing risks enables efficient use of resources, avoidance of waste, and focused attention on critical project aspects.
    • Stakeholder Confidence: Proactive risk management increases the stakeholders’ confidence, as it emphasises delivering a reliable product.
    • Adaptability: Risk management helps teams better prepare for any changes in scenarios and inconvenient situations that may arise, keeping the project on track and steady ground.
    • Cost Control: Identifying and managing risks avoids negative consequences that could lead to overruns and cost more than the project was budgeted to spend.

    Overview of Risk Management Process

    The risk management process in software engineering typically involves the following steps.

    1. Risk Identification: The first is identifying risks that might harm the project. This involves using brainstorming, checklists, historical data analysis, and SWOT analysis to identify risks.
    2. Risk Analysis and Evaluation: After identifying risks, they are assessed to understand their potential consequences and the chances of their occurrence. This step can be either qualitative, where assessment tools such as the Probability and Impact Matrix are applied or quantitative, where Monte Carlo Simulation and Decision Tree Analysis are used.
    3. Risk Prioritization: The results of the risk evaluation are then ranked by likelihood of occurrence and impact on the project. High-probability, high-impact risks are the crucial ones and deserve prompt attention.
    4. Risk Response Planning: In this step, a response strategy is chosen for each significant risk:
      • Avoidance: Changing the project plan so that the risk is eliminated or its effects are avoided.
      • Mitigation: Taking measures to reduce the probability or the magnitude of the threat.
      • Transfer: Shifting the consequences of a risk to a third party, typically through insurance or outsourcing.
      • Acceptance: Recognising that an adverse event may happen and drawing up a contingency plan in case it does occur.
    5. Risk Monitoring and Control: Risks must be monitored across the whole project life cycle, since risk is never far from the picture. This means carrying out periodic risk analyses, risk reviews, and risk audits so that the risk management plan stays up to date and covers all emerging risks.
    6. Risk Communication: To manage risks in an organisation, there must be good communication between the different departments. This maintains openness about the risks being handled, the measures planned to prevent them from materialising, and changes in their status.

    Types of Risks in Software Engineering:

    1. Technical Risks

    Technical risks relate to the selection, use, and methodology of the technology used in software development. Such risks may be associated with technology selection, the application’s intricacy, and performance issues.

    Technology Changes

    The software industry is highly volatile, and new solutions appear often. Adopting modern technologies can prove beneficial, but it is also risky: integration problems, compatibility problems, and training needs can all arise with new or still-developing technology. A programmer who changes the programming language or framework mid-project will most likely encounter new and unfamiliar bugs and may take time to develop solutions.

    Software Complexity

    As software systems stay in use and evolve, they tend to become more extensive and complex, which increases the difficulty of designing and implementing them. High complexity may also make the software tricky to comprehend, test, and alter, and defects may increase.

    Performance Issues

    Performance risks relate to the software’s capacity to deliver the requisite performance characteristics, including response time, throughput, and scalability. Poor performance frustrates users, leads to suboptimal and unstable system behaviour, and can make the system unable to handle high loads.

    2. Project Management Risks

    Project management risks arise from the activities carried out in planning, executing, and controlling a project. One thing should be understood about these risks: they can affect the project schedule, consume resources, and increase expenses.

    Schedule Slippage

    The following factors may extend project time horizons: new requirements, unforeseen risks and problems, and ineffective management. There is always a danger that, due to such slippages, a project may run out of time, cost more than planned, or, worse, cause stakeholders to lose confidence in the project managers.

    Resource Shortages

    Resource scarcity is the unavailability of the human, financial, or other resources needed for a given project. A lack of resources can slow down the project’s processes, compromise its quality, and put pressure on the project team members.

    Budget Overruns

    Budget overruns occur when the project’s total cost exceeds the planned or expected budget because of wrong estimation, changes in project scope, or the discovery of new activities that require funding. Going over budget can put the project’s financial sustainability at risk and result in clients’ or users’ discontent.

    3. Organizational Risks

    Organisational risks relate to the operational and organisational aspects of the venture that implements the software project. These risks can arise from conflicts, changes in management, and restructurings.

    Stakeholder Conflicts

    Conflicts or differences of opinion can arise among the various stakeholders over goals, objectives, or specifications. These disputes may result in time extensions for the project, high costs, and the absence of a shared vision.

    Management Changes

    Fluctuation at the leadership or core management level can interfere with the project’s continuity and disturb decision-making. When management changes, there is often a shift in organisational focus and direction, changed objectives, and sometimes a loss of project experience.

    Organisational Restructuring

    Restructuring refers to significant modifications to the company’s formal structure, business practices, or strategies that may affect the project. Common consequences of restructuring are resource redistribution, a shift in the scale of projects, and possible deterioration of working relations.

    4. External Risks

    External risks originate outside the organisation and are not easily influenced by the project team. They may involve changes in the regulations that govern the company’s operations, fluctuations in the market, and disasters such as floods.

    Regulatory Changes

    New government policies, standards, codes of practice, or other regulations may change the requirements or the constraints placed on the project. This can require changes to the software, new compliance activities, and additional costs.

    Market Fluctuations

    Market fluctuations, for instance a downturn or a shift in demand, can affect the project’s feasibility and chances of success. Changes in the market can cause fluctuations in funding for a particular project, as well as changes in its priority or in the scope of work to be done.

    Natural Disasters

    Project activities may be affected by earthquakes, floods, or hurricanes. Such catastrophes can disrupt schedules, damage assets, and require additional money to restore and maintain the business.

    Risk Identification

    Risk identification in software engineering is one of the first steps in the risk management process. It entails identifying the risks that are likely to hurt the project in question, so that the project team can tackle problems before they worsen.

    Techniques for Identifying Risks

    1. Brainstorming: Brainstorming is an imaginative process in which the team generates ideas about potential threats. It involves restating the project’s goals and objectives, promoting cross-talk and the exchange of ideas, outlining all possible risks without immediately appraising or assessing each one, and then arranging the identified risks for later classification. Brainstorming draws on the group’s combined knowledge and ideas to surface risks that may not be apparent at first.
    2. Delphi Technique: The Delphi technique uses independent, anonymous experts who propose possible risks. It involves picking people with different experiences and codified knowledge in the field, and conducting surveys in several cycles during which respondents can mention threats and make recommendations.
    3. Checklists: Checklists are preset lists of typical hazards that can impact software projects. They rely on historical facts and standard industry practice, and help ensure that all risk categories are covered when working through a detailed risk list.
    4. Historical Data Analysis: Historical data analysis involves investigating past projects and the risks that materialised in them, using original project data such as risk logs and project post-mortems. By evaluating the occurrences, consequences, and likely sources of previous risks, and by finding patterns and regularities, it points to risks in the current project and helps organisations avoid known dangers.
    5. SWOT Analysis: SWOT analysis evaluates the company’s internal strengths and weaknesses to identify the opportunities and threats in its business environment that affect the project.

    Tools for Risk Identification

    1. Risk Breakdown Structure (RBS): The Risk Breakdown Structure (RBS) is a tree-structured list of risks classified by type. It helps in establishing the basic risk classes (technical, project, organisational) and in evaluating the priorities of the risks.
    2. Cause and Effect Diagrams: Fishbone (Ishikawa) diagrams relate potential causes to their consequences and make the relationship between them easy to understand.
    3. Risk Registers: A risk register is a documented record of the identified risks, their characteristics, and the responses planned for them.
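    A risk register can be modelled as a simple record type. The fields and entries below are illustrative, not a standard schema; the point is that a register is structured data that can be sorted and filtered for review meetings:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a risk register (fields are illustrative, not a standard)."""
    identifier: str
    description: str
    category: str        # e.g. technical, project, organisational
    probability: float   # estimated likelihood, 0..1
    impact: int          # e.g. 1 (insignificant) .. 5 (catastrophic)
    response: str = "open"

    @property
    def exposure(self):
        # Exposure = probability * impact, matching the formulas in this text.
        return self.probability * self.impact

register = [
    RiskEntry("R1", "New framework unproven", "technical", 0.4, 4),
    RiskEntry("R2", "Key developer may leave", "people", 0.2, 5),
]

# Highest-exposure risk first, for review meetings.
register.sort(key=lambda e: e.exposure, reverse=True)
print([e.identifier for e in register])
```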

    Risk Analysis and Evaluation

    Qualitative Risk Analysis

    1. Probability and Impact Matrix

    A probability and impact matrix is an elementary but effective tool that ranks risks by probability and impact. The matrix is generally a chart on which the risks are plotted.

    • Probability: The chance that the risk will happen, usually rated on a scale (low, medium, high).
    • Impact: The possible consequences if the risk materialises, also rated on a scale (insignificant, moderate, significant).
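    One way to encode such a matrix is as a lookup over the two ordinal scales. The combination rule below (summing the two ordinal positions) is an assumption of this sketch; real matrices are often defined cell by cell:

```python
# A simple probability-impact matrix as a lookup (ratings are illustrative).
LEVELS = ["low", "medium", "high"]
IMPACTS = ["insignificant", "moderate", "significant"]

def matrix_rating(probability, impact):
    """Combine ordinal probability and impact into an overall risk rating."""
    score = LEVELS.index(probability) + IMPACTS.index(impact)  # 0..4
    return ["low", "low", "medium", "high", "high"][score]

print(matrix_rating("high", "significant"))  # high
print(matrix_rating("low", "moderate"))      # low
```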

    2. Risk Urgency Assessment

    Risk Urgency Assessment is concerned with determining how soon a risk could occur and how quickly it could impact the project. It helps identify threats that require urgent attention so they can be prioritised.

    Quantitative Risk Analysis

    1. Monte Carlo Simulation

    Monte Carlo Simulation is a prediction method that quantitatively analyses the impact of risks on a project. By setting different inputs and running many simulations, it gives a probabilistic view of the risks.

    • Define Variables: Identify and describe the variables that characterise the project (time, cost, quality, resources, etc.).
    • Assign Probability Distributions: Assign probability distributions to these variables, using historical records or heuristics.
    • Run Simulations: Use software to run thousands of simulations, with the inputs varied randomly according to their distributions.
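    The three steps above can be sketched with the standard library alone. The task list and its triangular (min, mode, max) duration estimates are hypothetical; the output is a percentile view of total project duration rather than a single-point estimate:

```python
import random

# Hypothetical tasks with triangular duration estimates (min, mode, max), in weeks.
tasks = [(4, 6, 10), (2, 3, 5), (5, 8, 14)]

random.seed(42)  # fixed seed so the sketch is reproducible
samples = []
for _ in range(10_000):
    # One simulated project: sample each task duration and sum them.
    total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    samples.append(total)

samples.sort()
p50 = samples[len(samples) // 2]        # median outcome
p90 = samples[int(len(samples) * 0.9)]  # pessimistic (90th percentile) outcome
print(f"median ~{p50:.1f} weeks, 90th percentile ~{p90:.1f} weeks")
```

The gap between the median and the 90th percentile is what single-point estimates hide, and it is the main output a project manager would read off this analysis.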

    2. Decision Tree Analysis

    Decision Tree Analysis is a graphical technique for making decisions under risk. It entails drawing a tree of the possible decisions, their consequences, and the likelihood and effect of each outcome.

    • Define Decision Points: Determine the critical decisions that can be made throughout the project.
    • Map Decision Paths: Create branches for each possible decision and all corresponding outcomes.
    • Assign Probabilities: Estimate the probability of each possible outcome.
    • Calculate Expected Values: Compute the expected value of each decision path by multiplying the likelihood of each outcome by its magnitude and summing.
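    The expected-value step can be shown with a tiny two-branch tree. The decision (build vs. buy a component) and all the probabilities and cost outcomes are hypothetical:

```python
# Hypothetical decision: each option has (probability, cost outcome in $k) branches.
options = {
    "build": [(0.6, -50), (0.4, -120)],  # may go smoothly, may overrun
    "buy":   [(0.9, -80), (0.1, -100)],  # mostly fixed cost, small integration risk
}

def expected_value(outcomes):
    """Sum of probability * outcome over each branch of the decision tree."""
    return sum(p * v for p, v in outcomes)

# Pick the option with the best (least negative) expected cost.
best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # build: EV -78 beats buy: EV -82
```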

    3. Sensitivity Analysis

    Sensitivity Analysis examines how variations in the project’s input variables affect the result. It determines the exposure level of each variable and identifies which ones influence project risk most heavily.

    • Identify Variables: Choose the key project variables, such as cost and time.
    • Change Variables: Systematically alter one variable at a time while holding the others fixed.
    • Measure Impact: Analyse the impact of each change on the project’s results.
    • Analyze Sensitivity: Determine which variables influence the project risk to the highest degree.
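    The one-at-a-time procedure above can be sketched against a deliberately simple cost model. The model and its inputs (staff, monthly rate in $k, months) are hypothetical; the loop perturbs each variable by +10% while holding the others fixed:

```python
# Hypothetical baseline inputs for a toy cost model.
base = {"staff": 5, "rate": 8.0, "months": 6}

def project_cost(staff, rate, months):
    # Toy model: cost ($k) = people * monthly rate * duration.
    return staff * rate * months

baseline = project_cost(**base)
for var in base:
    # Perturb one variable by +10%, keep the rest at baseline.
    perturbed = dict(base, **{var: base[var] * 1.10})
    delta = project_cost(**perturbed) - baseline
    print(f"{var}: +10% -> cost changes by {delta:.1f} ({delta / baseline:.0%})")
```

In this multiplicative model every variable is equally sensitive; with a more realistic model the loop would reveal which inputs dominate the risk.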

    Risk Prioritization:

    Risk prioritisation is the part of risk management that helps teams tackle the risks with the highest exposure first.

    Risk Ranking Methods

    1. Risk Exposure Formula

    Risk Exposure (RE) is computed with the Risk Exposure formula, which gives a quantitative measure of exposure. It is the product of the likelihood of the risk event and its consequence for the project.

    Risk Exposure (RE) = Probability of occurrence * Impact

    • Probability of Occurrence: The approximate probability that a specific risk will occur, most often expressed as a percentage.
    • Impact: The approximate possible loss or harm if the risk materialises, often described in terms of cost, time, or quality.

    2. Failure Mode and Effects Analysis or FMEA

    Failure Mode and Effects Analysis (FMEA) is a step-by-step process for failure identification in a system along with its consequences.

    • Identify Failure Modes: Enumerate all the scenarios by which a process, product, or system can go wrong.
    • Determine Effects: Assess the consequences of each failure mode.
    • Assign Severity Ratings: Categorise the failure modes’ impact on a severity scale.
    • Assign Occurrence Ratings: Approximate the occurrence of the described failure mode on a scale as well.
    • Assign Detection Ratings: Rate, on the same scale, how likely each failure mode is to be detected before it causes a problem.
    • Calculate Risk Priority Number (RPN): The severity ranking, occurrence frequency, and detection efficiency are multiplied to arrive at the RPN for each failure mode.

    RPN = Severity * Occurrence * Detection
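    The RPN calculation and ranking can be sketched in a few lines. The failure modes and their 1-10 ratings below are hypothetical examples:

```python
# Hypothetical failure modes: (description, severity, occurrence, detection), 1..10.
failure_modes = [
    ("data loss on crash",     9, 3, 4),
    ("slow search response",   4, 7, 2),
    ("minor UI misalignment",  2, 6, 1),
]

def rpn(severity, occurrence, detection):
    """RPN = Severity * Occurrence * Detection, as in the formula above."""
    return severity * occurrence * detection

# Highest RPN first: these failure modes get attention first.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for desc, s, o, d in ranked:
    print(f"{desc}: RPN = {rpn(s, o, d)}")
```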

    Prioritisation Techniques

    Pareto Analysis

    Pareto Analysis, which follows the Pareto principle (the 80/20 rule), helps determine the significant few risks that are likely to cause most of the problems.

    • List Risks: List all the possible risks.
    • Quantify Impact: Estimate the effect of each risk, usually in terms of cost, time, or how frequently the event can occur.
    • Sort Risks: Sort the risks in decreasing order of estimated impact.
    • Cumulative Impact: Compute the cumulative impact of the ranked risks.
    • Identify Top Risks: Identify the risks with the most significant impact; typically the top 20% of risks account for approximately 80% of the potential losses.
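    The five steps above map directly onto a short script. The risk identifiers and impact figures are hypothetical; the loop stops once the cumulative impact reaches the 80% threshold:

```python
# Hypothetical risks and their estimated impact (e.g. $k of potential loss).
impacts = {"R1": 50, "R2": 120, "R3": 10, "R4": 15, "R5": 5}

# Sort by impact, descending (steps: list, quantify, sort).
ranked = sorted(impacts.items(), key=lambda kv: kv[1], reverse=True)
total = sum(impacts.values())

# Accumulate until ~80% of total impact is covered (steps: cumulative, top risks).
cumulative, top_risks = 0, []
for name, impact in ranked:
    cumulative += impact
    top_risks.append(name)
    if cumulative / total >= 0.80:
        break
print(top_risks)  # the "vital few"
```

Here two of the five risks (40% of the list) already cover 85% of the total impact, which is the Pareto pattern the technique looks for.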

    Risk Score Calculation

    Risk Score Calculation provides several risk scores depending on specified factors such as probability and consequence.

    • Define Criteria: Set out the criteria used to assess risks, usually probability and severity.
    • Assign Scores: Assign each risk a score from 1 to 5 on each criterion.
    • Calculate Risk Score: Total (or average) the scores for each risk to arrive at its overall risk score.

    Risk Response Planning:

    Risk response planning is the step in which measures are identified that increase the likelihood of achieving the project’s aims and counter the threats that may hinder its success. It requires assessing which risks must be managed and what responses should be made to them.

    Strategies for Risk Response

    • Avoidance: This involves altering the project plan to avoid the risk or its effects. It can consist of changing requirements, improving communication channels, or acquiring more information to sidestep the risk. For example, a project can adopt data backup and recovery procedures to avoid the consequences of data loss.
    • Mitigation: Mitigation seeks to minimise the probability of a risk or the severity of its effect. This can include practices that reduce the impact of the risk factor or its probability of occurrence. For instance, a team could add redundant systems and keep strict maintenance schedules to reduce the likelihood of system failure.
    • Transfer: Risk transfer is a risk response strategy in which the consequences of a risk are shifted to a third party. This does not remove the risk but ensures that another firm bears the consequences. Examples include buying insurance, hiring third parties to provide some of the project’s features, or operating under a contract whose conditions shift risk.
    • Acceptance: This means understanding the risk and choosing to live with it should anything go wrong. This approach is often used where the cost of eliminating or reducing the risk is higher than its consequence. For instance, a team may agree to live with minor faults in secondary, non-priority product functionality because the cost of squashing them is prohibitive.

    Risk Monitoring and Control:

    Risk monitoring and control are critical parts of the risk management system in software engineering. They ensure that identified risks are tracked, assessed, and managed not just at the beginning and end of the SDLC but throughout the entire process. This is essential for sustaining the project’s objectives, quality, and schedule.

    1. Continuous Risk Monitoring

    Continuous risk monitoring is the regular observation and evaluation of the risks that may influence a software project. It is a forward-thinking approach aimed at ensuring that all likely risks are dealt with from the word go.

    • Risk Tracking: Periodically amending the risk register by updating it with the status of the identified risks.
    • Environmental Scanning: Keeping track of alterations that may occur within the sphere of the project that are unfavourable and potentially pose new risks to the work being done.
    • Trend Analysis: Evaluating risk data over time to identify potential future problems and relate them to the existing risks.

    2. Risk Audits

    Risk audits are structured reviews that assess the organisation and efficiency of risk management within a specific project.

    • Compliance Checks: Ensuring that the risk management activities comply with the standards and policies.
    • Effectiveness Assessment: Assessing the effectiveness of the risk responses and the effectiveness of the implemented risk management strategies.
    • Improvement Recommendations: Identifying areas of the process that need work and developing recommendations.

    3. Status Meetings and Reviews

    Communication about risk management responsibilities should include regular status meetings and reviews. These meetings provide up-to-date information and ensure all the stakeholders are informed.

    • Progress Reports: Informing on changes in risks’ status and on the efficiency of actions aimed at their mitigation.
    • Stakeholder Involvement: Informing the project’s essential stakeholders about risks and their effects on the project.
    • Decision Making: Decision-making support based on the current risk data and their analysis.

    4. Risk Reassessment

    Risk reassessment is the periodic review of the risk register to record new observations and decisions as the project environment changes. This keeps the risk management plan current and effective.

    • Periodic Reviews: Carry out periodic risk register updates to capture new risks and re-evaluate old ones.

    5. Performance Metrics and KPIs

    Performance indicators are used to evaluate the efficiency of risk management; key performance indicators (KPIs) are a particular type of performance indicator. They give quantitative data that can be used to gauge how effectively risks are being handled.

    • Risk Exposure: Estimating the threat, or potential harm, that the identified risks pose to the project.
    • Risk Mitigation Success Rate: Measuring the proportion of risks that were successfully mitigated.
    • Time to Mitigate Risks: Measuring the average time it takes to address and respond to risks.
    • Cost of Risk Management: Measuring the return on investment of the implemented risk management activities.

    Tools and Techniques for Risk Management:

    Risk Management Software

    Risk management software is created to identify, evaluate, and control risks at any phase of software development.

    • RiskWatch: A risk assessment tool offering real-time data alongside its risk management features.
    • Active Risk Manager (ARM): Integrates risk management processes into the project management process.
    • Palisade’s DecisionTools Suite: Provides risk assessment and decision-making capabilities such as Monte Carlo simulation.

    Tools for Project Management with Risk Management Aspects

    Today, most PMSs have a built-in risk management function that factors this process into the overall PM framework.

    • Microsoft Project: Has risk management functions integrated within its comprehensive project planning tools.
    • JIRA: Commonly used in agile development, JIRA offers risk tracking as one of its project and issue management features.
    • Trello: Based on boards and cards for handling tasks and risks, with extensible add-ons for enhanced risk management.

    Collaborative Tools for Risk Tracking

    Collaborative tools improve communication among team members and increase shared understanding of existing threats and the appropriate countermeasures.

    • Slack: Provides channels and threads for team discussions, with possible integrations with risk management tools.
    • Confluence: An open-access platform that allows the team to record risks, report changes, and work together on the measures needed to address the risks.
    • Microsoft Teams: Enhances communication and cooperation with options for document sharing and project management.
  • Putnam Resource Allocation Model

    The Lawrence Putnam model describes the time and effort required to finish a software project of a specified size. Putnam makes use of the Norden/Rayleigh curve to estimate project effort, schedule, and defect rate, as shown in the figure:

    [Figure: The Norden/Rayleigh curve]

    Putnam noticed that software staffing profiles followed the well-known Rayleigh distribution. Putnam used his observation about productivity levels to derive the software equation:

                    L = Ck * K^(1/3) * td^(4/3)

    The various terms of this expression are as follows:

    K is the total effort expended (in PM) in product development, and L is the product size estimate in KLOC.

    td corresponds to the time of system and integration testing. Therefore, td can reasonably be considered as the time required for developing the product.

    Ck is the state-of-technology constant and reflects constraints that impede the progress of the program.

    Typical values of Ck are:

    Ck = 2 for a poor development environment

    Ck = 8 for a good software development environment

    Ck = 11 for an excellent environment (in addition to following software engineering principles, automated tools and techniques are used)

    The exact value of Ck for a specific task can be computed from the historical data of the organization developing it.
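
    As a minimal sketch of how the software equation can be applied, the expression L = Ck * K^(1/3) * td^(4/3) can be rearranged to estimate the total effort K from the size, schedule, and technology constant. The unit conventions below (td in years, K in person-years) and the example numbers are assumptions for illustration; in practice, Ck is calibrated from the organization's historical data.

    ```python
    def putnam_effort(size_kloc, ck, td_years):
        """Total effort K from the Putnam software equation,
        rearranged as K = (L / (Ck * td**(4/3))) ** 3."""
        return (size_kloc / (ck * td_years ** (4 / 3))) ** 3

    # Hypothetical example: 100 KLOC product, good environment (Ck = 8),
    # two-year development schedule.
    effort = putnam_effort(100, 8, 2)
    print(round(effort, 1))   # ≈ 122.1 person-years
    ```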

    Putnam proposed that the optimal staff build-up on a project should follow the Rayleigh curve. Only a small number of engineers are required at the beginning of a project to carry out planning and specification tasks. As the project progresses and more detailed work is necessary, the number of engineers reaches a peak. After implementation and unit testing, the number of project staff falls.

    Effect of a Schedule change on Cost

    Putnam derived the following expression:

                    L = Ck * K^(1/3) * td^(4/3)

    Where K is the total effort expended (in PM) in product development,

    L is the product size in KLOC

    td corresponds to the time of system and integration testing

    Ck is the state-of-technology constant and reflects constraints that impede the progress of the program.

    Now, using the above expression, we obtain:

                    K = L^3 / (Ck^3 * td^4)

    For the same product size, C = L^3 / Ck^3 is a constant.

                    K = C / td^4, i.e., K ∝ 1 / td^4

    (as project development effort is directly proportional to project development cost)

    From the above expression, it can easily be observed that when the schedule of a project is compressed, the required development effort, as well as the project development cost, increases in proportion to the fourth power of the degree of compression. This means that a relatively small compression in the delivery schedule can result in a substantial penalty in human effort as well as development cost.

    For example, if the estimated development time is 1 year, then to develop the product in 6 months, the total effort required (and hence the project cost) increases 16 times.
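
    The fourth-power penalty can be made concrete with a short sketch. Since K = C / td^4 for a fixed product size, compressing the schedule by a factor s multiplies the required effort (and hence cost) by s^4:

    ```python
    def effort_multiplier(original_td, compressed_td):
        # K ∝ 1 / td**4, so the effort ratio is (original / compressed) ** 4
        return (original_td / compressed_td) ** 4

    print(effort_multiplier(12, 6))   # 16.0 -> halving the schedule costs 16x the effort
    print(effort_multiplier(12, 9))   # ≈ 3.16 -> even a 25% compression triples the effort
    ```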

  • COCOMO Model in Software Engineering

    Before beginning new software projects, it is essential to understand the time and cost involved in software development today. The “Constructive Cost Model (COCOMO)” is one effective cost-estimating method widely used in several software projects.

    In 1981, Barry W. Boehm introduced the COCOMO model, a procedural approach to software cost estimation. This cost-estimating model predicts the effort, average team size, development time, and cost needed to construct a software project. Its transparency helps software managers understand why the model delivers the estimates it does. Furthermore, the basic COCOMO was originally based on a waterfall lifecycle.

    This tutorial will cover an overview of the COCOMO model, its generic types, the fundamental computation of the COCOMO-based cost estimating approach, its extension, and its relative advantages and disadvantages.

    The necessary steps in this model are:

    1. Get an initial estimate of the development effort from an evaluation of thousands of delivered lines of source code (KDLOC).
    2. Determine a set of 15 multiplying factors from various attributes of the project.
    3. Calculate the effort estimate by multiplying the initial estimate by all the multiplying factors, i.e., multiply the values obtained in steps 1 and 2.

    The initial estimate (also called the nominal estimate) is determined by an equation of the form used in the static single-variable models, using KDLOC as the measure of size. To determine the initial effort Ei in person-months, the equation used is of the type shown below:

                    Ei = a * (KDLOC)^b

    The values of the constants a and b depend on the project type.

    In COCOMO, projects are categorized into three types:

    1. Organic
    2. Semidetached
    3. Embedded

    1. Organic: A development project can be treated as organic type if the project deals with developing a well-understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects. Examples of this type of project are simple business systems, simple inventory management systems, and data processing systems.

    2. Semidetached: A development project can be treated as semidetached type if the development team consists of a mixture of experienced and inexperienced staff. Team members may have some experience with related systems but may be unfamiliar with some aspects of the system being developed. Examples of semidetached systems include a new operating system (OS), a Database Management System (DBMS), and a complex inventory management system.

    3. Embedded: A development project is treated as embedded type if the software being developed is strongly coupled to complex hardware, or if stringent constraints on the operational procedures exist. Examples: ATM software, air traffic control.

    For the three product categories, Boehm provides a different set of expressions to predict effort (in units of person-months) and development time from the size estimate in KLOC (kilo lines of code). The effort estimation takes into account the productivity loss due to holidays, weekly offs, coffee breaks, etc.

    Top Techniques for Applying the COCOMO Model

    By following the practices listed below, any software developer who uses the COCOMO model can ensure a proper workflow and arrive at an accurate and efficient cost estimate.

    • Utilize precise historical data
    • Modify the model to meet specific project requirements
    • Routinely update the estimations
    • Incorporate project management tools
    • Use additional estimating methods in conjunction with the model to improve judgment
    • Validate the outcomes using practical measures
    • Teach the team how to use the model
    • Note all assumptions and modifications
    • Hold review meetings on a regular basis
    • Assure ongoing participation from stakeholders

    According to Boehm, software cost estimation should be done through three models:

    1. Basic Model
    2. Intermediate Model
    3. Detailed Model

    1. Basic COCOMO Model: The basic COCOMO model gives an approximate estimate of the project parameters. The following expressions give the basic COCOMO estimation model:

                    Effort = a1 * (KLOC)^a2 PM
                    Tdev = b1 * (Effort)^b2 Months

    Where

    KLOC is the estimated size of the software product expressed in kilo lines of code,

    a1,a2,b1,b2 are constants for each group of software products,

    Tdev is the estimated time to develop the software, expressed in months,

    Effort is the total effort required to develop the software product, expressed in person months (PMs).

    Estimation of development effort

    For the three classes of software products, the formulas for estimating the effort based on the code size are shown below:

    Organic: Effort = 2.4(KLOC)^1.05 PM

    Semi-detached: Effort = 3.0(KLOC)^1.12 PM

    Embedded: Effort = 3.6(KLOC)^1.20 PM

    Estimation of development time

    For the three classes of software products, the formulas for estimating the development time based on the effort are given below:

    Organic: Tdev = 2.5(Effort)^0.38 Months

    Semi-detached: Tdev = 2.5(Effort)^0.35 Months

    Embedded: Tdev = 2.5(Effort)^0.32 Months
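
    The basic COCOMO expressions above translate directly into code. The sketch below uses the standard (a1, a2, b1, b2) constants for the three project classes; the function and variable names are illustrative.

    ```python
    # Standard basic COCOMO constants: (a1, a2, b1, b2) per project class.
    COEFFS = {
        "organic":      (2.4, 1.05, 2.5, 0.38),
        "semidetached": (3.0, 1.12, 2.5, 0.35),
        "embedded":     (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode):
        a1, a2, b1, b2 = COEFFS[mode]
        effort = a1 * kloc ** a2   # person-months
        tdev = b1 * effort ** b2   # months
        return effort, tdev

    # A 400 KLOC organic project:
    effort, tdev = basic_cocomo(400, "organic")
    print(round(effort, 2), round(tdev, 2))   # ≈ 1295.31 PM, ≈ 38.07 months
    ```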

    Some insight into the basic COCOMO model can be obtained by plotting the estimated characteristics for different software sizes. The figure shows a plot of estimated effort versus product size. From the figure, we can observe that the effort is somewhat superlinear in the size of the software product. Thus, the effort required to develop a product increases very rapidly with project size.

    [Figure: Effort versus product size]

    The development time versus the product size in KLOC is plotted in the figure. From the figure, it can be observed that the development time is a sublinear function of the size of the product, i.e., when the size of the product increases by two times, the time to develop the product does not double but rises moderately. This can be explained by the fact that for larger products, a larger number of activities that can be carried out concurrently can be identified. The parallel activities can be carried out simultaneously by the engineers. This reduces the time to complete the project. Further, from the figure, it can be observed that the development time is roughly the same for all three categories of products. For example, a 60 KLOC program can be developed in approximately 18 months, regardless of whether it is of organic, semidetached, or embedded type.

    [Figure: Development time versus product size]

    From the effort estimation, the project cost can be obtained by multiplying the required effort by the manpower cost per month. But, implicit in this project cost computation is the assumption that the entire project cost is incurred on account of the manpower cost alone. In addition to manpower cost, a project would incur costs due to hardware and software required for the project and the company overheads for administration, office space, etc.

    It is important to note that the effort and duration estimates obtained using the COCOMO model are called the nominal effort estimate and nominal duration estimate. The term nominal implies that if anyone tries to complete the project in a time shorter than the estimated duration, then the cost will increase drastically. But if anyone completes the project over a longer period of time than estimated, then there is almost no decrease in the estimated cost value.

    Example 1: Suppose a project is estimated at 400 KLOC. Calculate the effort and development time for each of the three modes, i.e., organic, semidetached, and embedded.

    Solution: The basic COCOMO equation takes the form:

                    Effort = a1 * (KLOC)^a2 PM
                    Tdev = b1 * (Effort)^b2 Months
                    Estimated size of project = 400 KLOC

    (i) Organic Mode

                    E = 2.4 * (400)^1.05 = 1295.31 PM
                    D = 2.5 * (1295.31)^0.38 = 38.07 Months

    (ii) Semidetached Mode

                    E = 3.0 * (400)^1.12 = 2462.79 PM
                    D = 2.5 * (2462.79)^0.35 = 38.45 Months

    (iii) Embedded Mode

                    E = 3.6 * (400)^1.20 = 4772.81 PM
                    D = 2.5 * (4772.81)^0.32 = 38 Months

    Example 2: A project of size 200 KLOC is to be developed. The software development team has average experience on similar types of projects. The project schedule is not very tight. Calculate the effort, development time, average staff size, and productivity of the project.

    Solution: The semidetached mode is the most appropriate, keeping in view the size, schedule, and experience of the development team.

    Hence       E = 3.0 * (200)^1.12 = 1133.12 PM
                    D = 2.5 * (1133.12)^0.35 = 29.3 Months

                    Average staff size (SS) = E / D = 1133.12 / 29.3 = 38.67 persons

                    Productivity (P) = 200000 / 1133.12 = 176 LOC/PM

    2. Intermediate Model: The basic COCOMO model assumes that effort is a function only of the number of lines of code and some constants chosen according to the software system class. However, in reality, many other project attributes also affect the effort. The intermediate COCOMO model recognizes this fact and refines the initial estimate obtained through the basic COCOMO model by using a set of 15 cost drivers based on various attributes of software engineering.

    Classification of Cost Drivers and their attributes:

    (i) Product attributes –

    • Required software reliability extent
    • Size of the application database
    • The complexity of the product

    (ii) Hardware attributes –

    • Run-time performance constraints
    • Memory constraints
    • The volatility of the virtual machine environment
    • Required turnaround time

    (iii) Personnel attributes –

    • Analyst capability
    • Software engineering capability
    • Applications experience
    • Virtual machine experience
    • Programming language experience

    (iv) Project attributes –

    • Use of software tools
    • Application of software engineering methods
    • Required development schedule

    The cost drivers are divided into four categories:

    [Table: Cost driver ratings and effort multiplier values]

    Intermediate COCOMO equation:

                    E = ai * (KLOC)^bi * EAF
                    D = ci * (E)^di

    Coefficients for intermediate COCOMO

    Project      | ai  | bi   | ci  | di
    Organic      | 2.4 | 1.05 | 2.5 | 0.38
    Semidetached | 3.0 | 1.12 | 2.5 | 0.35
    Embedded     | 3.6 | 1.20 | 2.5 | 0.32
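
    A sketch of how the intermediate model refines the nominal estimate: the effort adjustment factor (EAF) is the product of the cost driver multipliers. The two driver values used below are hypothetical ratings, not values from the COCOMO tables.

    ```python
    COEFFS = {
        "organic":      (2.4, 1.05, 2.5, 0.38),
        "semidetached": (3.0, 1.12, 2.5, 0.35),
        "embedded":     (3.6, 1.20, 2.5, 0.32),
    }

    def intermediate_cocomo(kloc, mode, driver_multipliers):
        ai, bi, ci, di = COEFFS[mode]
        eaf = 1.0
        for m in driver_multipliers:   # one multiplier per rated cost driver
            eaf *= m
        effort = ai * kloc ** bi * eaf   # person-months, scaled by EAF
        tdev = ci * effort ** di         # months
        return effort, tdev

    # Hypothetical ratings: high required reliability (1.15), low tool support (1.10)
    effort, tdev = intermediate_cocomo(50, "organic", [1.15, 1.10])
    ```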

    3. Detailed COCOMO Model: Detailed COCOMO incorporates all the qualities of the intermediate version with an assessment of the cost driver's effect on each phase of the software engineering process. The detailed model uses different effort multipliers for each cost driver attribute. In detailed COCOMO, the whole software is divided into multiple modules; COCOMO is then applied to the various modules to estimate effort, and the efforts are summed.

    The Six phases of detailed COCOMO are:

    1. Planning and requirements
    2. System design
    3. Detailed design
    4. Module code and test
    5. Integration and test
    6. Cost constructive model

    The effort is determined as a function of program size, and a set of cost drivers is given according to each phase of the software lifecycle.

    COCOMO Extension

    It is worthwhile to study the extensions of this successful COCOMO model, even though this page discusses only the COCOMO I model (1981). A clear illustration is COCOMO II (1995), the expansion of COCOMO I, which is used with different software development process categories, including Agile, iterative, spiral, and waterfall models.

    In addition to the COCOMO I and COCOMO II models, a number of cost-estimating models are also being created, such as the Constructive Phased Schedule & Effort Model (COPSEMO) and the Constructive Rapid Application Development schedule estimate model (CORADMO).

    Pros of the COCOMO Model

    • Unlike other models such as SLIM, COCOMO is straightforward, and it is easy to understand how it operates.
    • The cost drivers help the estimator better grasp the effects of the many factors influencing project costs.
    • COCOMO offers baselines drawn from historical projects.
    • It is simple to estimate the project’s overall cost using the COCOMO model.

    Cons of the COCOMO Model

    • When most effort estimates are needed early in the project, it is challenging to estimate KDSI.
    • In actuality, KDSI is a length measure rather than a size measure.
    • It is extremely susceptible to misclassification of the development mode.
    • A key success factor is adapting the model to the organization’s needs using historical data, which is not always available.
    • It limits the accuracy of the software cost estimates.
    • COCOMO disregards customer knowledge, skills, and collaboration.
    • Additionally, agile development is centered on quick feedback cycles and taking advantage of fresh insights, whereas COCOMO assumes the design is fixed in advance.

    Examples Based on Case Studies and Real-World Applications

    • NASA Projects: NASA has estimated costs and timelines for several space exploration software projects using the COCOMO model.
    • Defence Systems: The U.S. Department of Defense uses COCOMO for defense systems to guarantee precise budgeting and prompt fulfillment of software project needs.
    • Telecommunications: Telecom corporations use COCOMO to calculate the time and expenses involved in creating sophisticated network management software.
    • IBM: IBM used COCOMO to efficiently manage project timetables, optimize resources, and create extensive enterprise software systems.
    • Healthcare: In healthcare IT projects, electronic health record systems and other vital applications are planned and developed using COCOMO.
  • Software Cost Estimation

    For any new software project, it is necessary to know how much it will cost to develop and how much development time it will take. These estimates are needed before development is initiated. Several estimation procedures have been developed, and they have the following attributes in common:

    1. Project scope must be established in advance.
    2. Software metrics are used as a basis from which evaluation is made.
    3. The project is broken into small pieces, which are estimated individually.

    To achieve reliable cost and schedule estimates, several options arise:

    1. Delay estimation until late in the project.
    2. Use relatively simple decomposition techniques to generate project cost and schedule estimates.
    3. Acquire one or more automated estimation tools.

    Uses of Cost Estimation

    1. During the planning stage, one needs to choose how many engineers are required for the project and to develop a schedule.
    2. While monitoring the project’s progress, one needs to assess whether the project is progressing according to schedule and take corrective action if necessary.

    Cost Estimation Models

    A model may be static or dynamic. In a static model, a single variable is taken as the key element for calculating cost and time. In a dynamic model, all variables are interdependent, and there is no basic variable.

    [Figure: Classification of cost estimation models]

    Static, Single-Variable Models: A model that makes use of a single variable to calculate desired values such as cost, time, effort, etc. is said to be a single-variable model. The most common equation is:

                                    C = a * L^b

    Where    C = cost
                    L = size
                    a and b are constants

    The Software Engineering Laboratory established a model called the SEL model for estimating its software production. This model is an example of a static, single-variable model.

                    E = 1.4 * L^0.93
                    DOC = 30.4 * L^0.90
                    D = 4.6 * L^0.26

    Where    E = effort (in person-months)
                    DOC = documentation (number of pages)
                    D = duration (in months)
                    L = size (in KLOC)

    Static, Multivariable Models: These models depend on several variables describing various aspects of the software development environment. Several variables are needed to describe the software development process, and a selected equation combines these variables to give an estimate of time and cost. Such models are called multivariable models.

    WALSTON and FELIX developed their model at IBM. The following equation gives the relationship between lines of source code and effort:

                    E = 5.2 * L^0.91

    In the same manner, the duration of development is given by:

                    D = 4.1 * L^0.36

    The productivity index uses 29 variables that were found to be highly correlated with productivity, as follows:

                    I = Σ Wi * Xi   (i = 1 to 29)

    Where Wi is the weight factor for the i-th variable and Xi ∈ {-1, 0, +1}; the estimator gives Xi one of the values -1, 0, or +1 depending on whether the variable decreases, has no effect on, or increases productivity.

    Example: Compare the Walston-Felix model with the SEL model on a software development project expected to involve 8 person-years of effort.

    1. Calculate the number of lines of source code that can be produced.
    2. Calculate the duration of the development.
    3. Calculate the productivity in LOC/PY
    4. Calculate the average manning

    Solution:

    The amount of manpower involved = 8 PY = 96 person-months

    (a) The number of lines of source code can be obtained by reversing the equations to give:

                    L = (E / a)^(1/b)

    Then

                    L (SEL) = (96/1.4)^(1/0.93) = 94.264 KLOC (94264 LOC)
                    L (W-F) = (96/5.2)^(1/0.91) = 24.632 KLOC (24632 LOC)

    (b) The duration in months can be calculated by means of the equations:

                    D (SEL) = 4.6 * (L)^0.26
                                   = 4.6 * (94.264)^0.26 = 15 months
                    D (W-F) = 4.1 * (L)^0.36
                                   = 4.1 * (24.632)^0.36 = 13 months

    (c) Productivity is the number of lines of code produced per person-year:

                    Productivity (SEL) = 94264 / 8 = 11783 LOC/PY
                    Productivity (W-F) = 24632 / 8 = 3079 LOC/PY

    (d) Average manning is the average number of persons required per month on the project:

                    M (SEL) = 96 / 15 = 6.4 persons
                    M (W-F) = 96 / 13 = 7.4 persons
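
    The worked example above can be checked with a short sketch that inverts E = a * L^b to L = (E / a)^(1/b) and then derives duration, productivity, and average manning:

    ```python
    def size_from_effort(effort_pm, a, b):
        # Invert E = a * L**b  ->  L = (E / a) ** (1 / b), with L in KLOC
        return (effort_pm / a) ** (1 / b)

    E = 96  # 8 PY = 96 person-months

    L_sel = size_from_effort(E, 1.4, 0.93)   # SEL model
    L_wf = size_from_effort(E, 5.2, 0.91)    # Walston-Felix model

    D_sel = 4.6 * L_sel ** 0.26   # duration in months
    D_wf = 4.1 * L_wf ** 0.36

    prod_sel = L_sel * 1000 / 8   # productivity in LOC per person-year
    manning_sel = E / D_sel       # average persons on the project

    print(round(L_sel, 1), round(D_sel, 1))   # ≈ 94.3 KLOC, ≈ 15.0 months
    ```
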
  • Software Project Planning

    A software project is the complete methodology of software development, from requirement gathering to testing and maintenance, carried out according to defined execution procedures within a specified period to achieve the intended software product.

    Need for Software Project Management

    Software development is a relatively new stream in world business, and there is very little experience in building software products. Most software products are tailored to the customer's requirements. Most significantly, the underlying technology changes and advances so frequently and rapidly that experience with one product may not apply to another. All such business and environmental constraints bring risk to software development; hence, it is essential to manage software projects efficiently.

    Software Project Manager

    The software project manager is responsible for planning and scheduling project development. They manage the work to ensure that it is completed to the required standard. They monitor progress to check that development is on time and within budget. Project planning must incorporate the major issues like size and cost estimation, scheduling, project monitoring, personnel selection and evaluation, and risk management. To plan a successful software project, we must understand:

    • The scope of the work to be completed
    • The risks to be analyzed
    • The resources required
    • The task to be accomplished
    • The schedule to be followed

    Software project planning starts before technical work begins. The various planning activities are:

    [Figure: Software project planning activities]

    Size is the crucial parameter for the estimation of the other activities. Resource requirements are estimated based on cost and development time. The project schedule may prove very useful for controlling and monitoring the progress of the project; it depends on the resources and the development time.

  • Cyclomatic Complexity

    Cyclomatic complexity is a software metric used to measure the complexity of a program. Thomas J. McCabe developed this metric in 1976. McCabe interprets a computer program as a strongly connected directed graph: nodes represent parts of the source code having no branches, and arcs represent possible control flow transfers during program execution. The notion of a program graph has been used for this measure, and it is used to measure and control the number of paths through a program. The complexity of a computer program can be correlated with the topological complexity of a graph.

    How to Calculate Cyclomatic Complexity?

    McCabe proposed the cyclomatic number V(G) from graph theory as an indicator of software complexity. The cyclomatic number is equal to the number of linearly independent paths through a program in its graph representation. For a program control graph G, the cyclomatic number V(G) is given as:

                  V(G) = E - N + 2 * P

    E = the number of edges in graph G

    N = the number of nodes in graph G

    P = the number of connected components in graph G.

    Example:

    [Figure: Example control flow graph and its cyclomatic complexity]

    Properties of Cyclomatic complexity:

    Following are the properties of Cyclomatic complexity:

    1. V(G) is the maximum number of linearly independent paths in the graph.
    2. V(G) >= 1.
    3. G has only one path if V(G) = 1.
    4. Complexity should be minimized; a common guideline is to keep V(G) at or below 10.
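
    A minimal sketch of the V(G) computation for a control flow graph given as an edge list (the graph and names are illustrative):

    ```python
    def cyclomatic_complexity(edges, num_nodes, components=1):
        # V(G) = E - N + 2 * P
        return len(edges) - num_nodes + 2 * components

    # Control flow graph of a single if/else: 1 -> 2, 1 -> 3, 2 -> 4, 3 -> 4
    edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
    print(cyclomatic_complexity(edges, num_nodes=4))   # 2: two independent paths
    ```
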
  • Information Flow Metrics

    The other set of metrics we would like to consider is known as Information Flow Metrics. Information flow metrics are founded upon the following concept: the simplest system consists of components, and it is the work these components do and how they are fitted together that determine the complexity of the system. The following working definitions are used in information flow:

    Component: Any element identified by decomposing a (software) system into its constituent parts.

    Cohesion: The degree to which a component performs a single function.

    Coupling: The term used to describe the degree of linkage between one component to others in the same system.

    Information flow metrics deal with this type of complexity by observing the flow of information among system components or modules. This metric was given by Henry and Kafura, so it is also known as Henry and Kafura's metric.

    This metric is based on the measurement of information flow among system modules. It is sensitive to the complexity due to interconnection among system components. In this measure, the complexity of a software module is defined as the sum of the complexities of the procedures included in the module. A procedure contributes complexity due to the following two factors:

    1. The complexity of the procedure code itself.
    2. The complexity due to the procedure's connections to its environment.

    The effect of the first factor is captured through the LOC (lines of code) measure. For the quantification of the second factor, Henry and Kafura defined two terms, namely FAN-IN and FAN-OUT.

    FAN-IN: The FAN-IN of a procedure is the number of local flows into that procedure plus the number of data structures from which the procedure retrieves information.

    FAN-OUT: The FAN-OUT of a procedure is the number of local flows out of that procedure plus the number of data structures which the procedure updates.

    Procedure Complexity = Length * (FAN-IN * FAN-OUT)^2

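
    Henry and Kafura's formula is easy to sketch in code; the procedure length and flow counts below are hypothetical numbers for illustration:

    ```python
    def hk_complexity(length_loc, fan_in, fan_out):
        # Procedure Complexity = Length * (FAN-IN * FAN-OUT) ** 2
        return length_loc * (fan_in * fan_out) ** 2

    # A 25-line procedure with 3 flows in and 2 flows out:
    print(hk_complexity(25, fan_in=3, fan_out=2))   # 25 * (3 * 2)**2 = 900
    ```
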
  • Data Structure Metrics

    Essentially, the purpose of software development and other activities is to process data. Some data is input to a system, program, or module; some data may be used internally; and some data is the output from a system, program, or module.

    Example:

    Program          | Data Input                                                     | Internal Data                                                | Data Output
    Payroll          | Name / Social Security No. / Pay rate / Number of hours worked | Withholding rates, Overtime factors, Insurance premium rates | Gross pay, Withholding, Net pay, Pay ledgers
    Spreadsheet      | Item names / Item amounts / Relationships among items          | Cell computations, Subtotals                                 | Spreadsheet of items and totals
    Software Planner | Program size / No. of software developers on team              | Model parameter constants, Coefficients                      | Est. project effort, Est. project duration

    That is why an important set of metrics captures the amount of data input to, processed internally by, and output from software. A count of these data structures is called Data Structure Metrics. The concentration is on variables (and given constants) within each module, ignoring input-output dependencies.

    There are several data structure metrics used to compute the effort and time required to complete a project. These metrics are:

    1. The Amount of Data.
    2. The Usage of data within a Module.
    3. Program weakness.
    4. The sharing of Data among Modules.

    1. The Amount of Data: To measure the amount of data, there are several different metrics:

    • Number of variables (VARS): In this metric, the number of variables used in the program is counted.
    • Number of operands (η2): In this metric, the number of operands used in the program is counted.
      η2 = VARS + Constants + Labels
    • Total number of occurrences of variables (N2): In this metric, the total number of occurrences of the variables is computed.

    2. The Usage of Data within a Module: To measure this metric, the average number of live variables is computed. A variable is live from its first to its last reference within a procedure.

    Average live variables (LV) = (sum of the counts of live variables over all executable statements) / (number of executable statements)

    For example, if we want to characterize the average number of live variables for a program having m modules, we can use this equation:

    LV (program) = (Σ LVi) / m,   i = 1 to m

    Where LVi is the average live variable metric computed for the i-th module. A similar equation can compute the average span size (SP) for a program of n spans:

    SP = (Σ SPj) / n,   j = 1 to n

    3. Program Weakness: Program weakness depends on the weakness of its modules. If modules are weak (less cohesive), the effort and time required to complete the project increase.


    Module Weakness (WM) = LV * γ

    A program is normally a combination of various modules; hence, program weakness can be a useful measure and is defined as:

    WP = (Σ WMi) / m,   i = 1 to m

    Where

    WMi: weakness of the i-th module

    WP: weakness of the program

    m: number of modules in the program
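
    The weakness measures can be sketched as follows; the LV and γ values are hypothetical, and the module list is invented for illustration:

    ```python
    def module_weakness(lv, gamma):
        # WM = LV * gamma
        return lv * gamma

    def program_weakness(modules):
        # WP = sum(WMi) / m, averaged over the m modules
        weaknesses = [module_weakness(lv, gamma) for lv, gamma in modules]
        return sum(weaknesses) / len(weaknesses)

    # Two hypothetical modules: (average live variables, gamma)
    print(program_weakness([(4.0, 1.5), (2.0, 2.0)]))   # (6.0 + 4.0) / 2 = 5.0
    ```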

    4. The Sharing of Data among Modules: As data sharing between modules increases (higher coupling), the number of parameters passed between modules also increases. As a result, more effort and time are required to complete the project. So the sharing of data among modules is an important metric for calculating effort and time.
