Author: saqibkhan

  • Software Failure Mechanisms

    Software failures can be classified as follows:

    Transient failure: These failures only occur with specific inputs.

    Permanent failure: This failure appears on all inputs.

    Recoverable failure: System can recover without operator help.

    Unrecoverable failure: System can recover with operator help only.

    Non-corruption failure: Failure does not corrupt system state or data.

    Corrupting failure: It damages the system state or data.

    Software failures may be due to bugs, ambiguities, oversights or misinterpretation of the specification that the software is supposed to satisfy, carelessness or incompetence in writing code, inadequate testing, incorrect or unexpected usage of the software or other unforeseen problems.

    Hardware vs. Software Reliability

    Hardware faults are mostly physical faults, whereas software faults are design faults, which are tough to visualize, classify, detect, and correct.
    Hardware components generally fail due to wear and tear, whereas software components fail due to bugs.
    In hardware, design faults may also exist, but physical faults generally dominate. In software, we can hardly find a strict counterpart to the hardware "manufacturing" process, unless the simple act of uploading software modules into place counts. Therefore, the quality of the software does not change once it is uploaded into storage and starts running.
    Hardware exhibits the failure characteristics shown in the following figure:

    [Figure: hardware failure rate over time (the bathtub curve)]

    It is called the bathtub curve. Periods A, B, and C stand for the burn-in phase, useful life phase, and end-of-life phase, respectively.

    Software reliability does not show the same characteristics as hardware. A possible curve, with software reliability projected on the same axes, is shown in the following figure:

    [Figure: software failure rate over time]

    The two significant differences between the hardware and software curves are:

    One difference is that, in the last phase, software does not have an increasing failure rate as hardware does. In this phase, the software is approaching obsolescence; there is no motivation for any upgrades or changes to the software. Therefore, the failure rate will not change.

    The second difference is that, in the useful-life phase, the software will experience a sharp increase in failure rate each time an upgrade is made. The failure rate then levels off gradually, partly because of the defects found and fixed after the upgrade.

    The upgrades in the figure above signify feature upgrades, not reliability upgrades. For feature upgrades, the complexity of the software is likely to increase, since the functionality of the software is enhanced. Even bug fixes may be a source of more software failures if a fix introduces other defects into the software. Reliability upgrades, by contrast, are likely to bring a drop in the software failure rate, since the objective of the upgrade is to enhance software reliability, for example through a redesign or reimplementation of some modules using better engineering approaches, such as the clean-room method.

    A partial list of the distinct features of software compared to hardware is listed below:


    Failure cause: Software defects are primarily design defects.

    Wear-out: Software does not have an energy-related wear-out phase. Bugs can arise without warning.

    Repairable system: Periodic restarts can help fix software problems.

    Time dependency and life cycle: Software reliability is not a function of operational time.

    Environmental factors: These do not affect software reliability, except insofar as they may affect program inputs.

    Reliability prediction: Software reliability cannot be predicted from any physical basis since it depends entirely on human factors in design.

    Redundancy: Redundancy cannot improve software reliability if identical software components are used.

    Interfaces: Software interfaces are purely conceptual rather than visual.

    Failure rate motivators: The failure rate is generally not predictable from analyses of separate statements.

    Built with standard components: Well-understood and extensively tested standard components help improve maintainability and reliability. But in the software industry, we have not observed this trend. Code reuse has been around for some time, but only to a minimal extent. There are no standard components for software, except for some standardized logic structures.

  • Software Reliability in Software Engineering

    Introduction

    Software Reliability means operational reliability. It is described as the ability of a system or component to perform its required functions under stated conditions for a specified period of time.

    Software reliability is also defined as the probability that a software system fulfills its assigned task in a given environment for a predefined number of input cases, assuming that the hardware and the input are free of error.

    Software Reliability is an essential aspect of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software reliability is hard to achieve because the complexity of software tends to be high. While any system with a high degree of complexity, including software, will be hard to bring to a certain level of reliability, system developers tend to push complexity into the software layer, given the speedy growth of system size and the ease of doing so by upgrading the software.

    For example, large next-generation aircraft will have over 1 million source lines of software on-board; next-generation air traffic control systems will contain between one and two million lines; the upcoming International Space Station will have over two million lines on-board and over 10 million lines of ground support software; several significant life-critical defense systems will have over 5 million source lines of software. While the complexity of software is inversely associated with software reliability, it is directly related to other vital factors in software quality, especially functionality, capability, etc.

    Techniques of Software Reliability

    Two distinct types of models are used to calculate software reliability:

    1. Prediction Modeling
    2. Estimation Modeling

    Prediction Modeling

    As the name suggests, the Prediction Model is constructed using presumptions about the specifications needed to create the specified software program. Among these assumptions are the information and materials from historical occurrences or the software’s operational features. Because it is thought to be extremely unreliable to predict during or after development, it is carried out during the design phase or before the development process begins. Forecasts are not based on the current situation but rather on the idea that the application will be used at some point in the future.

    Estimation Modeling

    The estimation model is constructed using current data from the development or testing processes and is based on several software features. It is applied later in the software development life cycle, when all of the required software components have been implemented. The software's reliability is estimated for the current or the immediately following time periods. Several software reliability analysts have also developed different models, such as the Basic Execution Time Model, the Shooman Model, the Bug Seeding Model, the Logarithmic Poisson Time Model, the Littlewood–Verrall Model, the Goel–Okumoto Model, the Musa–Okumoto Model, and the Jelinski–Moranda Model.

    Metrics for software Reliability

    The software system applications’ reliability is measured and derived using software reliability metrics, which can be expressed numerically or in any other way. The system behaviour, the software’s business objective, the anticipated recovery time, the likelihood of failure, the kinds of users who use the program, etc., can all influence the kind of metric that the application developers decide on. The following are the kinds of assessments that professionals in software application development frequently use in real time to gauge software reliability.

    Based on Requirements

    The client’s actual needs can be discovered in the software development specification documentation. It usually outlines the requirements and expectations for developing the software, including its functional features, non-functional look, and dependencies on other related systems. It is employed to identify the functionality of the software.

    It is used to address non-functional aspects such as the software's look and feel, compatibility, performance validation, integration capabilities, the load passed through the program in real time, and so on. The process outcome should demonstrate that there are no differences between the client's needs and the software development team's understanding of them.

    Based on Design and Code

    The action plan assesses software reliability during the design and coding phases. The software component usability features and software size are the domains where the estimation is used. Maintaining the system in smaller units is crucial in order to significantly lower the likelihood of accidents. The reliability scale will operate as needed for the analysis once the fault occurrences are contained. Multiple components with easily comprehensible software units are preferable to a single large complex system.

    Testing Reliability Metrics

    During the testing process, the dependability metrics are divided into two pieces. One is validation to ensure that the functional behaviour of the built application matches the requirements specified in the documentation. The other portion evaluates the program’s functions and performance. The first is referred to as a Black Box Testing Method, and the latter is known as White Box Testing, which is usually performed by the developer.

    The testing procedure is conducted against the previously prepared documentation, under the client's requirement specifications. Any discrepancy found at this point is reported, fixed as part of the bug-fixing process, and monitored using a defect life cycle. This gives an efficient way of validating the entire system, ensuring that every aspect of the developed system is checked.

    The following are the approaches employed, based on the required type of metric analysis, during the above-mentioned software development phases:

    • Mean Time to Failure – (Total time) / (Number of units tested)
    • Mean Time to Repair – (Total time for maintenance) / (total repairs)
    • Mean Time Between Failure – MTTF + MTTR
    • Rate of Occurrence of Failure – 1 / (MTTF)
    • Probability of Failure – (Number of Failures) / (Total cases considered)
    • Availability – MTTF / MTBF

    Example to Implement Software Reliability:

    Let us consider the Mean Time to Failure computation, which requires both the total time and the number of units tested.

    For example, if the total time is 100 and the number of units tested is 40, the MTTF is calculated as:

    MTTF = (total time) / (number of units tested)

    = 100 / 40

    = 2.5
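
    To make the arithmetic concrete, the following minimal C++ sketch computes each of the metrics listed above from raw test data. The figures other than the 100/40 pair used above are invented purely for illustration:

        #include <iostream>

        int main() {
            // Assumed sample figures, for illustration only.
            double totalTestTime   = 100.0; // total time across all units under test
            double unitsTested     = 40.0;  // number of units tested
            double maintenanceTime = 20.0;  // total time spent on repairs
            double totalRepairs    = 10.0;  // number of repairs carried out
            double failures        = 4.0;   // failures observed
            double totalCases      = 50.0;  // total cases considered

            double mttf         = totalTestTime / unitsTested;    // Mean Time to Failure
            double mttr         = maintenanceTime / totalRepairs; // Mean Time to Repair
            double mtbf         = mttf + mttr;                    // Mean Time Between Failures
            double rocof        = 1.0 / mttf;                     // Rate of Occurrence of Failure
            double probFailure  = failures / totalCases;          // Probability of Failure
            double availability = mttf / mtbf;                    // Availability

            std::cout << "MTTF = " << mttf << "\n"
                      << "MTTR = " << mttr << "\n"
                      << "MTBF = " << mtbf << "\n"
                      << "ROCOF = " << rocof << "\n"
                      << "P(failure) = " << probFailure << "\n"
                      << "Availability = " << availability << "\n";
            return 0;
        }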

    Factors Affecting Software Reliability

    A user’s assessment of a software program’s dependability is based on two types of data.

    • The quantity of errors in the software
    • The way users interact with the system. This is referred to as the operational profile.

    The following factors affect the number of faults in a system:

    • The code’s size and complexity.
    • Features of the employed development process.
    • Training, education, and experience of the development staff.
    • Operating environment.

    Software Reliability Applications

    There are several uses for software reliability:

    1. Technologies related to software engineering are compared.
      • How much does it cost to adopt a technology?
      • In terms of cost and quality, what is the technology's yield?
    2. Monitoring the system testing process: The failure intensity metric provides information about the system's current quality; a high intensity indicates that additional testing is necessary.
    3. Controlling the system in use: The degree of software modification required for maintenance has an impact on the system's dependability.
    4. Improved understanding of software development processes: We can gain a better understanding of software development processes by quantifying quality.

    Software Reliability Benefits

    Including software reliability in the software development process has the following benefits:

    • It helps in preserving the data handled by the software.
    • It helps in avoiding software failures.
    • The process of upgrading the system is simple.
    • More productivity is the result of improved system performance and efficiency.
  • Structured Programming

    In structured programming, we subdivide the whole program into small modules so that the program becomes easy to understand. The purpose of structured programming is to linearize the control flow through a computer program so that the execution sequence follows the sequence in which the code is written. The dynamic structure of the program then resembles the static structure of the program. This enhances the readability, testability, and modifiability of the program. This linear flow of control can be managed by restricting the set of allowed program constructs to single-entry, single-exit formats.

    Why do we use Structured Programming?

    We use structured programming because it allows the programmer to understand the program easily. If a program consists of thousands of instructions and an error occurs then it is complicated to find that error in the whole program, but in structured programming, we can easily detect the error and then go to that location and correct it. This saves a lot of time.

    The following are the rules of structured programming:

    Structured Rule One: Code Block

    If the entry conditions are correct, but the exit conditions are wrong, the error must be in the block. This is not true if the execution is allowed to jump into a block. The error might be anywhere in the program. Debugging under these circumstances is much harder.

    Rule 1 of Structured Programming: A code block is structured, as shown in the figure. In flowcharting terms, a box with a single entry point and a single exit point is structured. Structured programming is a method of making it evident that the program is correct.


    Structure Rule Two: Sequence

    A sequence of blocks is correct if the exit conditions of each block match the entry conditions of the following block. Execution enters each block at the block’s entry point and leaves through the block’s exit point. The whole series can be regarded as a single block, with an entry point and an exit point.

    Rule 2 of Structured Programming: Two or more code blocks in the sequence are structured, as shown in the figure.


    Structured Rule Three: Alternation

    If-then-else is frequently called alternation (because there are alternative options). In structured programming, each choice is a code block. If alternation is organized as in the flowchart at right, then there is one entry point (at the top) and one exit point (at the bottom). The structure should be coded so that if the entry conditions are fulfilled, then the exit conditions are satisfied (just like a code block).

    Rule 3 of Structured Programming: The alternation of two code blocks is structured, as shown in the figure.

    An example of an entry condition for an alternation structure is: register $8 contains a signed integer. The exit condition may be: register $8 contains the absolute value of the signed number. The branch structure is used to fulfill the exit condition.


    Structured Rule 4: Iteration

    Iteration (the while loop) is organized as shown at right. It also has one entry point and one exit point. The entry point has conditions that must be satisfied, and the exit point has requirements that will be fulfilled. There are no jumps into the structure from external points of the code.

    Rule 4 of Structured Programming: The iteration of a code block is structured, as shown in the figure.


    Structured Rule 5: Nested Structures

    In flowcharting terms, any code block can be expanded into any of the structures. Conversely, if there is a portion of the flowchart that has a single entry point and a single exit point, it can be summarized as a single code block.

    Rule 5 of Structured Programming: A structure (of any size) that has a single entry point and a single exit point is equivalent to a code block. For example, we are designing a program to go through a list of signed integers calculating the absolute value of each one. We may (1) first regard the program as one block, then (2) sketch in the iteration required, and finally (3) put in the details of the loop body, as shown in the figure.


    The other control structures, such as case, do-until, do-while, and for, are not strictly needed. However, they are sometimes convenient and are usually regarded as part of structured programming. In assembly language, they add little convenience.
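
    As a rough illustration of these rules, the rule-5 example (walking a list of signed integers and computing the absolute value of each one) can be written using only sequence, alternation, and iteration, each with a single entry and a single exit. The following C++ sketch, with invented sample values, is one possible rendering; the original example is phrased in terms of assembly-language registers:

        #include <cstdio>

        int main() {
            int values[] = { -3, 7, 0, -12, 5 };            // the list of signed integers
            const int count = sizeof(values) / sizeof(values[0]);

            int i = 0;
            while (i < count) {                             // iteration: one entry, one exit
                if (values[i] < 0) {                        // alternation: one entry, one exit
                    values[i] = -values[i];                 // negative branch
                }                                           // non-negative branch: nothing to do
                std::printf("%d\n", values[i]);
                i = i + 1;
            }
            return 0;                                       // single exit from the program
        }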

  • Programming Style

    Programming style refers to the technique used in writing the source code for a computer program. Most programming styles are designed to help programmers quickly read and understand the program as well as avoid making errors. (Older programming styles also focused on conserving screen space.) A good coding style can overcome the many deficiencies of a first programming language, while poor style can defeat the intent of an excellent language.

    The goal of good programming style is to provide understandable, straightforward, elegant code. The programming style used in a given program may be derived from the coding standards or code conventions of a company or other computing organization, as well as from the preferences of the individual programmer.

    Some general rules or guidelines in respect of programming style:


    1. Clarity and simplicity of expression: The program should be designed in such a manner that its objectives are clear.

    2. Naming: In a program, you are required to name modules, processes, variables, and so on. Care should be taken that the chosen names are representative rather than cryptic.

          For example:
              Bad:     a = 3.14 * r * r;
              Better:  area_of_circle = 3.14 * radius * radius;

    3. Control Constructs: It is desirable that, as far as possible, single-entry, single-exit constructs be used.

    4. Information hiding: The information contained in the data structures should be hidden from the rest of the system wherever possible. Information hiding can decrease the coupling between modules and make the system more maintainable (see the sketch after this list).

    5. Nesting: Deep nesting of loops and conditions greatly harms the static and dynamic behavior of a program. It also makes the program logic difficult to understand, so it is desirable to avoid deep nesting.

    6. User-defined types: Make heavy use of user-defined data types like enum, class, structure, and union. These data types make your program code easy to write and easy to understand.

    7. Module size: The module size should be uniform. The size of the module should not be too big or too small. If the module size is too large, it is not generally functionally cohesive. If the module size is too small, it leads to unnecessary overheads.

    8. Module Interface: A module with a complex interface should be carefully examined.

    9. Side-effects: When a module is invoked, it sometimes has the side effect of modifying the program state. Such side effects should be avoided wherever possible.
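
    To illustrate rule 4 (information hiding), here is a minimal C++ sketch; the Counter class and its members are invented for this example. The internal representation of the counter is hidden behind a small interface, so other parts of the program cannot corrupt it directly:

        #include <iostream>

        class Counter {
        public:
            void increment() { ++count; }        // the only ways to change the state
            void reset()     { count = 0; }
            int  value() const { return count; } // read-only access
        private:
            int count = 0;                       // hidden: callers cannot touch this directly
        };

        int main() {
            Counter hits;
            hits.increment();
            hits.increment();
            std::cout << "hits = " << hits.value() << "\n"; // prints "hits = 2"
            return 0;
        }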

  • Coding in Software Engineering

    Coding is the process of transforming the design of a system into a computer language format. The coding phase of software development is concerned with translating the design specification into source code. It is necessary to write source code and internal documentation so that conformance of the code to its specification can be easily verified.

    Coding is done by coders or programmers, who may be people other than the designers. The goal is not so much to reduce the effort and cost of the coding phase as to cut the cost of later stages. The cost of testing and maintenance can be significantly reduced with efficient coding.

    Goals of Coding

    1. To translate the design of the system into a computer language format: Coding is the process of transforming the design of a system into a computer language format that can be executed by a computer and that performs the tasks specified during the design phase.
    2. To reduce the cost of later phases: The cost of testing and maintenance can be significantly reduced with efficient coding.
    3. Making the program more readable: The program should be easy to read and understand. Having readability and understandability as a clear objective of the coding activity can itself help in producing more maintainable software.

    For implementing our design in code, we require a high-level programming language. A programming language should have the following characteristics:

    Characteristics of Programming Language

    Following are the characteristics of Programming Language:


    Readability: A good high-level language will allow programs to be written in a way that resembles a quite-English description of the underlying functions. The coding may then be done in an essentially self-documenting way.

    Portability: High-level languages, being virtually machine-independent, make it easier to develop portable software.

    Generality: Most high-level languages allow the writing of a vast collection of programs, thus relieving the programmer of the need to become an expert in many diverse languages.

    Brevity: The language should have the ability to implement an algorithm with a small amount of code. Programs expressed in high-level languages are often significantly shorter than their low-level equivalents.

    Error checking: A programmer is likely to make many errors in the development of a computer program. Many high-level languages enforce a great deal of error checking both at compile time and at run time.

    Cost: The ultimate cost of a programming language is a function of many of its characteristics.

    Quick translation: It should permit quick translation.

    Efficiency: It should permit the generation of efficient object code.

    Modularity: It is desirable that programs can be developed in the language as several separately compiled modules, with the appropriate structure for ensuring self-consistency among these modules.

    Widely available: Language should be widely available, and it should be feasible to provide translators for all the major machines and all the primary operating systems.

    A coding standard lists several rules to be followed during coding, such as the way variables are to be named, the way the code is to be laid out, error return conventions, etc.

    Coding Standards

    General coding standards refer to how the developer writes code, so here we will discuss some essential standards regardless of the programming language being used.

    The following are some representative coding standards:

    1. Indentation: Proper and consistent indentation is essential in producing easy to read and maintainable programs.
      Indentation should be used to:
      • Emphasize the body of a control structure such as a loop or a select statement.
      • Emphasize the body of a conditional statement
      • Emphasize a new scope block
    2. Inline comments: Inline comments explaining the functioning of the subroutine, or key aspects of the algorithm, shall be used frequently.
    3. Rules for limiting the use of globals: These rules list what types of data can be declared global and what cannot.
    4. Structured Programming: Structured (or modular) programming methods shall be used. "GOTO" statements shall not be used, as they lead to "spaghetti" code, which is hard to read and maintain, except as outlined in the FORTRAN Standards and Guidelines.
    5. Naming conventions for global variables, local variables, and constant identifiers: A possible naming convention can be that global variable names always begin with a capital letter, local variable names are made of small letters, and constant names are always capital letters.
    6. Error return conventions and exception handling system: The way error conditions are reported by different functions in a program should be standard within an organization. For example, different functions, on encountering an error condition, should either return a 0 or a 1 consistently.
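
    As an illustration of standards 5 and 6, the following C++ sketch applies one possible convention: global names begin with a capital letter, local names use small letters, constant names are all capitals, and every function reports errors by returning 0 and success by returning 1. The names and the particular convention are assumptions chosen for the example, not requirements:

        #include <cstdio>

        const int MAX_REQUEST_ID = 9999;   // constant: all capital letters
        int Total_requests = 0;            // global variable: begins with a capital letter

        // Error return convention: the function returns 1 on success, 0 on error.
        int process_request(int request_id) {
            int result = 1;                // local variable: small letters
            if (request_id < 0 || request_id > MAX_REQUEST_ID) {
                std::fprintf(stderr, "invalid request id %d\n", request_id);
                result = 0;                // error
            } else {
                ++Total_requests;          // success path
            }
            return result;
        }

        int main() {
            process_request(42);           // returns 1
            process_request(-1);           // prints a message and returns 0
            return 0;
        }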

    Coding Guidelines

    General coding guidelines provide the programmer with a set of best practices which can be used to make programs easier to read and maintain. Most of the examples use the C language syntax, but the guidelines can be applied to all languages.

    The following are some representative coding guidelines recommended by many software development organizations.


    1. Line Length: It is considered a good practice to keep the length of source code lines at or below 80 characters. Lines longer than this may not be visible properly on some terminals and tools. Some printers will truncate lines longer than 80 columns.

    2. Spacing: The appropriate use of spaces within a line of code can improve readability.

    Example:

    Bad:        cost=price+(price*sales_tax);
                fprintf(stdout,"The total cost is %5.2f\n",cost);

    Better:     cost = price + (price * sales_tax);
                fprintf(stdout, "The total cost is %5.2f\n", cost);

    3. The code should be well-documented: As a rule of thumb, there should be at least one comment line, on average, for every three source lines.

    4. The length of any function should not exceed 10 source lines: A very lengthy function is generally difficult to understand, as it probably carries out many different functions. For the same reason, lengthy functions are likely to have a disproportionately larger number of bugs.

    5. Do not use goto statements: Use of goto statements makes a program unstructured and very tough to understand.

    6. Inline Comments: Inline comments promote readability.

    7. Error Messages: Error handling is an essential aspect of computer programming. This includes not only adding the necessary logic to test for and handle errors but also making error messages meaningful.
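
    For instance, guideline 7 can be illustrated with the short C++ sketch below; the file name and the wording of the message are invented for the example. The code tests for an error condition and reports what failed, on which file, and why, instead of failing silently:

        #include <cstdio>
        #include <cstring>
        #include <cerrno>

        int main() {
            const char *path = "settings.cfg";             // hypothetical input file
            std::FILE *fp = std::fopen(path, "r");
            if (fp == nullptr) {
                // Meaningful message: says what failed, on which file, and why.
                std::fprintf(stderr, "error: cannot open '%s': %s\n",
                             path, std::strerror(errno));
                return 1;
            }
            /* ... read the configuration ... */
            std::fclose(fp);
            return 0;
        }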

  • What is User Interface (UI) Design?

    The user interface (UI) is the visual part of a computer application or operating system through which a user interacts with the computer or software. It determines how commands are given to the computer or the program and how data is displayed on the screen.

    Types of User Interface

    There are two main types of User Interface:

    • Text-Based User Interface or Command Line Interface
    • Graphical User Interface (GUI)

    Text-Based User Interface: This method relies primarily on the keyboard. A typical example of this is UNIX.

    Advantages

    • Many options that are easier to customize.
    • Typically capable of more powerful tasks.

    Disadvantages

    • Relies heavily on recall rather than recognition.
    • Navigation is often more difficult.

    Graphical User Interface (GUI): A GUI relies much more heavily on the mouse. A typical example of this type of interface is any version of the Windows operating system.

    GUI Characteristics

    Windows: Multiple windows allow different information to be displayed simultaneously on the user's screen.
    Icons: Icons represent different types of information. On some systems, icons represent files; on others, icons represent processes.
    Menus: Commands are selected from a menu rather than typed in a command language.
    Pointing: A pointing device such as a mouse is used for selecting choices from a menu or indicating items of interest in a window.
    Graphics: Graphics elements can be mixed with text on the same display.

    Advantages

    • Less expert knowledge is required to use it.
    • Easier to Navigate and can look through folders quickly in a guess and check manner.
    • The user may switch quickly from one task to another and can interact with several different applications.

    Disadvantages

    • Typically offers fewer options.
    • Usually less customizable; it is not easy to use one button for many different variations.

    UI Design Principles


    Structure: The design should organize the user interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with the overall user interface architecture.

    Simplicity: The design should make simple, common tasks easy, communicating clearly and directly in the user's language, and providing good shortcuts that are meaningfully related to longer procedures.

    Visibility: The design should make all required options and materials for a given function visible without distracting the user with extraneous or redundant data.

    Feedback: The design should keep users informed of actions or interpretations, changes of state or condition, and errors or exceptions that are relevant and of interest to the user, through clear, concise, and unambiguous language familiar to users.

    Tolerance: The design should be flexible and tolerant, decreasing the cost of errors and misuse by allowing undoing and redoing while also preventing bugs wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions.

  • Object-Oriented Design in Software Engineering

    In the object-oriented design method, the system is viewed as a collection of objects (i.e., entities). The state is distributed among the objects, and each object handles its own state data. For example, in a library automation software, each library member may be a separate object with its own data and functions that operate on that data. The tasks defined for one object cannot refer to or change the data of other objects. Objects have their own internal data, which represents their state. Similar objects form a class; in other words, each object is a member of some class. Classes may inherit features from a superclass.

    The different terms related to object design are:

    1. Objects: All entities involved in the solution design are known as objects. For example, person, banks, company, and users are considered as objects. Every entity has some attributes associated with it and has some methods to perform on the attributes.
    2. Classes: A class is a generalized description of an object. An object is an instance of a class. A class defines all the attributes, which an object can have and methods, which represents the functionality of the object.
    3. Messages: Objects communicate by message passing. Messages consist of the identity of the target object, the name of the requested operation, and any other information needed to perform the function. Messages are often implemented as procedure or function calls.
    4. Abstraction: In object-oriented design, complexity is handled using abstraction. Abstraction is the removal of the irrelevant and the amplification of the essentials.
    5. Encapsulation: Encapsulation is also called an information hiding concept. The data and operations are linked to a single unit. Encapsulation not only bundles essential information of an object together but also restricts access to the data and methods from the outside world.
    6. Inheritance: OOD allows similar classes to be stacked up in a hierarchical manner, where the lower classes or sub-classes can import, implement, and re-use allowed variables and functions from their immediate superclasses. This property of OOD is called inheritance. It makes it easier to define a specific class and to create generalized classes from specific ones.
    7. Polymorphism: OOD languages provide a mechanism whereby methods performing similar tasks but varying in arguments can be assigned the same name. This is known as polymorphism, which allows a single interface to perform functions for different types. Depending upon how the service is invoked, the respective portion of the code gets executed.
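
    The following compact C++ sketch touches several of these terms at once: class, object, message, encapsulation, inheritance, and polymorphism. The Shape and Circle names are illustrative only and do not come from the library example above:

        #include <iostream>

        class Shape {                                  // class: generalized description
        public:
            virtual double area() const = 0;           // polymorphism: one interface, many forms
            virtual ~Shape() = default;
        };

        class Circle : public Shape {                  // inheritance: Circle reuses Shape's interface
        public:
            explicit Circle(double r) : radius(r) {}
            double area() const override { return 3.14159 * radius * radius; }
        private:
            double radius;                             // encapsulation: state hidden from outside
        };

        int main() {
            Circle c(2.0);                             // object: an instance of a class
            Shape &s = c;
            std::cout << s.area() << "\n";             // message: request the area operation
            return 0;
        }
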
  • Function Oriented Design

    Function-oriented design is an approach to software design in which the design is decomposed into a set of interacting units or modules, where each unit or module has a clearly defined function. Thus, the system is designed from a functional viewpoint.

    Design Notations

    Design Notations are primarily meant to be used during the process of design and are used to represent design or design decisions. For a function-oriented design, the design can be represented graphically or mathematically by the following:


    Data Flow Diagram

    Data-flow design is concerned with designing a series of functional transformations that convert system inputs into the required outputs. The design is described as data-flow diagrams. These diagrams show how data flows through a system and how the output is derived from the input through a series of functional transformations.

    Data-flow diagrams are a useful and intuitive way of describing a system. They are generally understandable without specialized training, notably if control information is excluded. They show end-to-end processing. That is the flow of processing from when data enters the system to where it leaves the system can be traced.

    Data-flow design is an integral part of several design methods, and most CASE tools support data-flow diagram creation. Different notations may use different icons to represent data-flow diagram entities, but their meanings are similar.

    The notation which is used is based on the following symbols:

    [Figure: data-flow diagram symbols]

    As an example, consider a report generator. It produces a report which describes all of the named entities in a data-flow diagram. The user inputs the name of the design represented by the diagram. The report generator then finds all the names used in the data-flow diagram. It looks up a data dictionary and retrieves information about each name. This is then collated into a report which is output by the system.

    Data Dictionaries

    A data dictionary lists all data elements appearing in the DFD model of a system. The data items listed include all data flows and the contents of all data stores appearing on the DFDs in the DFD model of the system.

    A data dictionary lists the purpose of all data items and the definition of all composite data elements in terms of their component data items. For example, a data dictionary entry may state that the data item grossPay consists of the components regularPay and overtimePay.

                      grossPay = regularPay + overtimePay

    For the smallest units of data elements, the data dictionary lists their name and their type.
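
    For example, the dictionary entries for the items above might be recorded as follows; the types and derivations shown are assumed purely for illustration:

              grossPay    = regularPay + overtimePay
              regularPay  : numeric (hours worked x basic rate)
              overtimePay : numeric (overtime hours x overtime rate)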

    A data dictionary plays a significant role in any software development process because of the following reasons:

    • A Data dictionary provides a standard language for all relevant information for use by engineers working in a project. A consistent vocabulary for data items is essential since, in large projects, different engineers of the project tend to use different terms to refer to the same data, which unnecessarily causes confusion.
    • The data dictionary provides the analyst with a means to determine the definition of various data structures in terms of their component elements.

    Structured Charts

    A structured chart partitions a system into black boxes. A black box is a component whose functionality is known to the user without knowledge of its internal design.


    Structured Chart is a graphical representation which shows:

    • System partitions into modules
    • Hierarchy of component modules
    • The relation between processing modules
    • Interaction between modules
    • Information passed between modules

    The following notations are used in structured chart:

    [Figure: structured chart notation]

    Pseudo-code

    Pseudo-code notation can be used in both the preliminary and detailed design phases. Using pseudo-code, the designer describes system characteristics using short, concise, English-language phrases that are structured by keywords such as If-Then-Else, While-Do, and End.
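
    For example, a pseudo-code sketch of the report generator described earlier might read as follows; the wording of each step is an assumption, intended only to show the keyword-structured style:

        Read the design name entered by the user
        While there are unprocessed names in the data-flow diagram Do
            Look up the next name in the data dictionary
            If the name is found Then
                Add its description to the report
            Else
                Add the name to the list of undefined entities
            End
        End
        Output the collated report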

  • Cohesion and Coupling in Software Engineering

    Introduction:

    The design and evaluation of software systems are heavily reliant on the concepts of cohesion and coupling. They describe the arrangement of, and communication between, the modules or constituents of a software system. Building software applications that are resilient, scalable, and maintainable requires an understanding of cohesion and coupling.

    Introduction to Cohesion and Coupling:

    The art of designing manageable and effective software components, known as modularization, is shaped by coupling and cohesion in software engineering.

    Module interdependence is defined by coupling, whereas component unity is measured by cohesion. Achieving high cohesion and low coupling encourages modular structures that are understandable and maintainable. This mutually beneficial relationship helps developers navigate complexity and improves testing, scalability, and teamwork. These guidelines affect customer satisfaction and project management throughout the whole software lifecycle.

    Module Coupling

    In software engineering, coupling is the degree of interdependence between software modules. Two modules that are tightly coupled are strongly dependent on each other, whereas two modules that are loosely coupled are not strongly dependent on each other. Uncoupled modules have no interdependence at all between them.

    The various types of coupling techniques are shown in fig:

    [Figure: types of module coupling]

    A good design is one that has low coupling. Coupling is measured by the number of relations between the modules; that is, coupling increases as the number of calls between modules increases or as the amount of shared data grows. Thus, it can be said that a design with high coupling will have more errors.

    Types of Module Coupling


    1. No Direct Coupling: There is no direct coupling between M1 and M2.


    In this case, modules are subordinates to different modules. Therefore, no direct coupling.

    2. Data Coupling: When data of one module is passed to another module, this is called data coupling.


    3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data items such as structures, objects, etc. When a module passes a non-global data structure or an entire structure to another module, they are said to be stamp coupled. For example, passing a structure variable in C or an object in C++ to a module.

    4. Control Coupling: Control coupling exists between two modules if data from one module is used to direct the order of instruction execution in another.

    5. External Coupling: External Coupling arises when two modules share an externally imposed data format, communication protocols, or device interface. This is related to communication to external tools and devices.

    6. Common Coupling: Two modules are common coupled if they share information through some global data items.


    7. Content Coupling: Content Coupling exists among two modules if they share code, e.g., a branch from one module into another module.
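
    The difference between data coupling and common coupling can be seen in the following minimal C++ sketch; the function and variable names are invented for illustration:

        #include <iostream>

        // Data coupling: the modules communicate only through the parameters passed.
        double net_price(double price, double tax_rate) {
            return price * (1.0 + tax_rate);
        }

        // Common coupling: both modules depend on the shared global below.
        double g_tax_rate = 0.18;                  // global data item shared by several modules
        double net_price_common(double price) {
            return price * (1.0 + g_tax_rate);     // hidden dependency on global state
        }

        int main() {
            std::cout << net_price(100.0, 0.18) << "\n";   // 118: the dependency is explicit
            std::cout << net_price_common(100.0) << "\n";  // 118: the dependency is hidden
            return 0;
        }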

    Module Cohesion

    In computer programming, cohesion refers to the degree to which the elements of a module belong together. Thus, cohesion measures the strength of the relationships between pieces of functionality within a given module. In highly cohesive systems, for example, functionality is strongly related.

    Cohesion is an ordinal type of measurement and is generally described as “high cohesion” or “low cohesion.”


    Types of Modules Cohesion

    1. Functional Cohesion: Functional cohesion is said to exist if the different elements of a module cooperate to achieve a single function.
    2. Sequential Cohesion: A module is said to possess sequential cohesion if its elements form the components of a sequence, where the output from one component of the sequence is input to the next.
    3. Communicational Cohesion: A module is said to have communicational cohesion if all tasks of the module refer to or update the same data structure, e.g., the set of functions defined on an array or a stack.
    4. Procedural Cohesion: A module is said to have procedural cohesion if its functions are all parts of a procedure in which a particular sequence of steps has to be carried out to achieve a goal, e.g., the algorithm for decoding a message.
    5. Temporal Cohesion: When a module includes functions that are associated only by the fact that they must all be executed at the same time, the module is said to exhibit temporal cohesion.
    6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the module perform similar operations, for example error handling, data input, and data output.
    7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs a set of tasks that are associated with each other very loosely, if at all.
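
    The contrast between functional cohesion and coincidental cohesion can be sketched in C++ as follows; the module contents are invented for illustration:

        #include <cmath>
        #include <iostream>

        // Functional cohesion: every element cooperates to achieve one function,
        // computing the distance between two points.
        double distance(double x1, double y1, double x2, double y2) {
            double dx = x2 - x1;
            double dy = y2 - y1;
            return std::sqrt(dx * dx + dy * dy);
        }

        // Coincidental cohesion: unrelated tasks lumped into one module.
        void misc_utilities() {
            std::cout << "printing a banner\n";   // output formatting
            int checksum = 42 % 7;                // unrelated arithmetic
            (void)checksum;                       // ...and so on
        }

        int main() {
            std::cout << distance(0, 0, 3, 4) << "\n"; // prints 5
            misc_utilities();
            return 0;
        }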

    Difference between Cohesion & Coupling:

    Definition — Coupling: the level of interdependence among a system's modules or constituent parts. Cohesion: the degree of focus and relatedness among the elements of a module or component.
    Focus — Coupling: interaction among modules. Cohesion: the elements that make up a module.
    Impact on change — Coupling: a change in one module may have an effect on others. Cohesion: modifications are contained within a module.
    Flexibility — Coupling: since changes are likely to spread, high coupling decreases system flexibility. Cohesion: as changes are localized, high cohesion increases system flexibility.
    Maintenance — Coupling: because changes propagate between modules, high coupling makes maintenance more difficult. Cohesion: as changes are limited in scope, high cohesion makes maintenance easier.
    Testing — Coupling: isolating and testing tightly coupled modules is more difficult. Cohesion: because a cohesive module's functionality is well contained, testing it is simpler.
    Reuse — Coupling: because of their dependencies, tightly coupled modules are less reusable. Cohesion: the clear and targeted functionality of cohesive modules makes them more reusable.
    Dependency — Coupling: represents module dependency. Cohesion: stands for the purpose and unity of a module.
    Design goal — Coupling: to reduce interdependencies, aim for low coupling. Cohesion: for modules to be clear and focused, strive for high cohesion.
    Objective — Coupling: minimize dependencies and interactions for system stability. Cohesion: group elements to accomplish a clear goal.
    System impact — Coupling: cascading failures and rigid architectures can result from high coupling. Cohesion: adaptable architectures and maintainability are encouraged by high cohesion.

    Conclusion:

    The quality and maintainability of software systems are greatly impacted by the fundamental software engineering concepts of cohesion and coupling. Strong module cohesion guarantees focused, unambiguous functionality, which facilitates the understanding, testing, and maintenance of code.

    Aiming for low coupling and high cohesion together results in systems that are more adaptable, resilient, and changeable. A well-designed software system achieves maintainability, reusability, and long-term success by striking a harmonious balance between coupling and cohesion. Software engineers can create systems that are not only functional but also flexible enough to adapt to changing user demands and technological breakthroughs by comprehending and putting these principles into practice.

  • Software Design Principles

    Software design principles are concerned with providing means to handle the complexity of the design process effectively. Effectively managing the complexity will not only reduce the effort needed for design but can also reduce the scope of introducing errors during design.

    Following are the principles of Software Design


    Problem Partitioning

    For a small problem, we can handle the entire problem at once, but for a significant problem, we divide and conquer: the problem is divided into smaller pieces so that each piece can be handled separately.

    For software design, the goal is to divide the problem into manageable pieces.

    Benefits of Problem Partitioning

    1. Software is easy to understand
    2. Software becomes simple
    3. Software is easy to test
    4. Software is easy to modify
    5. Software is easy to maintain
    6. Software is easy to expand

    These pieces cannot be entirely independent of each other as they together form the system. They have to cooperate and communicate to solve the problem. This communication adds complexity.

    Note: As the number of partitions increases, the cost of partitioning and the complexity of communication between the pieces increase.


    Abstraction

    An abstraction is a tool that enables a designer to consider a component at an abstract level without bothering about the internal details of the implementation. Abstraction can be used for existing elements as well as for the component being designed.

    Here, there are two common abstraction mechanisms

    1. Functional Abstraction
    2. Data Abstraction

    Functional Abstraction

    1. A module is specified by the function it performs.
    2. The details of the algorithm to accomplish the functions are not visible to the user of the function.

    Functional abstraction forms the basis for Function oriented design approaches.

    Data Abstraction

    Details of the data elements are not visible to the users of data. Data Abstraction forms the basis for Object Oriented design approaches.
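
    As a small illustration of data abstraction, the C++ sketch below exposes only the operations of a stack while hiding how the data is stored; the Stack interface shown is an assumption chosen for the example:

        #include <vector>
        #include <iostream>

        class Stack {
        public:
            void push(int v)   { items.push_back(v); }
            int  pop()         { int v = items.back(); items.pop_back(); return v; }
            bool empty() const { return items.empty(); }
        private:
            std::vector<int> items;   // representation hidden from users of the abstraction
        };

        int main() {
            Stack s;
            s.push(1);
            s.push(2);
            std::cout << s.pop() << "\n"; // prints 2; the caller never sees the vector
            return 0;
        }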


    Modularity

    Modularity refers to the division of software into separate modules which are differently named and addressed and are integrated later to obtain the completely functional software. It is the only property that allows a program to be intellectually manageable. Single large programs are difficult to understand and read due to the large number of reference variables, control paths, global variables, etc.

    The desirable properties of a modular system are:

    • Each module is a well-defined system that can be used with other applications.
    • Each module has single specified objectives.
    • Modules can be separately compiled and saved in the library.
    • Modules should be easier to use than to build.
    • Modules are simpler from outside than inside.

    Advantages and Disadvantages of Modularity

    In this topic, we will discuss the various advantages and disadvantages of modularity.


    Advantages of Modularity

    There are several advantages of Modularity

    • It allows large programs to be written by several or different people
    • It encourages the creation of commonly used routines to be placed in the library and used by other programs.
    • It simplifies the overlay procedure of loading a large program into main storage.
    • It provides more checkpoints to measure progress.
    • It provides a framework for complete testing, more accessible to test
    • It produces well-designed and more readable programs.

    Disadvantages of Modularity

    There are several disadvantages of Modularity

    • Execution time may be, though not necessarily, longer
    • Storage size may be, though not necessarily, increased
    • Compilation and loading time may be longer
    • Inter-module communication problems may be increased
    • More linkage required, run-time may be longer, more source lines must be written, and more documentation has to be done

    Modular Design

    Modular design reduces the design complexity and results in easier and faster implementation by allowing parallel development of various parts of a system. We discuss a different section of modular design in detail in this section:

    1. Functional Independence: Functional independence is achieved by developing functions that perform only one kind of task and do not excessively interact with other modules. Independence is important because it makes implementation more accessible and faster. The independent modules are easier to maintain, test, and reduce error propagation and can be reused in other programs as well. Thus, functional independence is a good design feature which ensures software quality.

    It is measured using two criteria:

    • Cohesion: It measures the relative function strength of a module.
    • Coupling: It measures the relative interdependence among modules.

    2. Information hiding: The principle of information hiding suggests that modules should be characterized by the design decisions they hide from all others; in other words, modules should be specified and designed so that the data contained within a module is inaccessible to other modules that have no need for such information.

    The use of information hiding as a design criterion for modular systems provides the most significant benefits when modifications are required during testing and, later, during software maintenance. This is because, as most data and procedures are hidden from other parts of the software, inadvertent errors introduced during modifications are less likely to propagate to other locations within the software.


    Strategy of Design

    A good system design strategy is to organize the program modules in such a way that they are easy to develop and, later, easy to change. Structured design methods help developers to deal with the size and complexity of programs. Analysts generate instructions for the developers about how code should be written and how pieces of code should fit together to form a program.

    To design a system, there are two possible approaches:

    1. Top-down Approach
    2. Bottom-up Approach

    1. Top-down Approach: This approach starts with the identification of the main components and then decomposing them into their more detailed sub-components.


    2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves up the hierarchy, as shown in the figure. This approach is suitable when a system already exists.
