Author: saqibkhan

  • Software Fault Tolerance

    Software fault tolerance is the ability of software to detect and recover from a fault that is happening, or has already happened, in either the software or the hardware of the system in which the software is running, so that the software can continue to provide service in accordance with its specification.

    Software fault tolerance is a necessary component in constructing the next generation of highly available and reliable computing systems, from embedded systems to data warehouse systems.

    To adequately understand software fault tolerance, it is important to understand the nature of the problem that software fault tolerance is supposed to solve.

    Software faults are all design faults; software manufacturing, the reproduction of software, is considered to be perfect. Having design faults as the sole source of the problem makes software very different from almost any other system in which fault tolerance is the desired property.

    Software Fault Tolerance Techniques


    1. Recovery Block

    The recovery block method is a simple technique developed by Randell. A recovery block operates with an adjudicator, which checks the results of different implementations of the same algorithm. In a system with recovery blocks, the system view is broken down into fault-recoverable blocks.

    The entire system is constructed from these fault-tolerant blocks. Each block contains at least a primary alternate, a secondary alternate, and exception-handling code, along with an adjudicator. The adjudicator is the component that determines the correctness of each alternate that is tried.

    The adjudicator should be kept somewhat simple in order to maintain execution speed and to aid correctness. Upon first entering a unit, the adjudicator executes the primary alternate. (There may be N alternates in a unit for the adjudicator to try.) If the adjudicator determines that the primary alternate failed, it rolls back the state of the system and tries the secondary alternate.

    If the adjudicator does not accept the results of any of the alternates, it invokes the exception handler, which signals that the software could not perform the requested operation.

    The recovery block technique increases the pressure on the specification to be precise enough to support multiple alternates that are functionally equivalent. This problem is discussed further in the context of the N-version software method.
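    To make the scheme concrete, here is a minimal Python sketch of a recovery block. The alternates, the acceptance test (adjudicator), and the exception handler are all illustrative stand-ins, not part of any standard API.

```python
def recovery_block(state, alternates, acceptable, exception_handler):
    """Try each alternate in order, rolling back state before every retry."""
    checkpoint = dict(state)            # save state so a failed alternate can be undone
    for alternate in alternates:
        result = alternate(state)
        if acceptable(result):          # the adjudicator accepts this result
            return result
        state.clear()
        state.update(checkpoint)        # roll back and try the next alternate
    return exception_handler()          # no alternate passed the acceptance test

# Illustrative use: two square-root routines checked by one simple adjudicator.
primary = lambda s: s["x"] ** 0.5 if s["x"] >= 0 else float("nan")
secondary = lambda s: abs(s["x"]) ** 0.5          # crude fallback alternate
acceptable = lambda r: r == r and r >= 0          # rejects NaN and negative results
print(recovery_block({"x": 9.0}, [primary, secondary], acceptable,
                     lambda: None))               # 3.0
```

    Note how the adjudicator stays deliberately simple: it only checks a property of the answer, not how the answer was produced.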

    2. N-Version Software

    The N-version software method attempts to parallel the traditional hardware fault tolerance concept of N-way redundant hardware. In an N-version software system, each module is implemented in up to N different versions. Each variant accomplishes the same function, but hopefully in a different way. Each version then submits its answer to a voter, or decider, which determines the correct answer and returns it as the result of the module.

    This system can hopefully overcome the design faults present in most software by relying on the concept of design diversity. An essential point about N-version software is that the system can include multiple types of hardware running the various versions of the software.

    N-version software can successfully tolerate faults only if the required design diversity is achieved. The dependence on appropriate specifications in N-version software (and recovery blocks) cannot be stressed enough.
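    A minimal Python sketch of the voting idea. The three variants and the majority decider are illustrative; a real N-version system would run independently developed versions, possibly on separate hardware.

```python
from collections import Counter

def n_version(value, versions):
    """Run every version on the same input and return the majority answer."""
    answers = [version(value) for version in versions]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes <= len(versions) // 2:         # no strict majority: the decider fails
        raise RuntimeError("voter could not reach a majority")
    return answer

# Three "independently written" variants of the same function (one is faulty).
v1 = lambda x: x * x
v2 = lambda x: x ** 2
v3 = lambda x: x + x        # design fault: wrong algorithm
print(n_version(4, [v1, v2, v3]))   # 16 -- the faulty version is outvoted
```

    The sketch also shows the method's weakness: if the variants are not truly diverse and share a fault, the voter happily returns the common wrong answer.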

    3. N-Version Software and Recovery Blocks

    The differences between the recovery block technique and the N-version technique are not numerous, but they are essential. In traditional recovery blocks, each alternate is executed serially until the adjudicator finds an acceptable solution. The recovery block method has since been extended to allow concurrent execution of the various alternates.

    The N-version techniques have always been designed to run concurrently on N-way hardware. In a serial retry system, the time cost of trying multiple alternates may be too high, especially for a real-time system. Conversely, concurrent systems require the expense of N-way hardware and a communications network to connect it.

    The recovery block technique requires that each module have its own specific adjudicator, whereas in the N-version method a single decider may be used. The recovery block technique, assuming that the programmer can create a sufficiently simple adjudicator, yields a system that is difficult to drive into an incorrect state.


  • Reliability Metrics

    Reliability metrics are used to quantitatively express the reliability of the software product. The choice of metric depends on the type of system to which it applies and the requirements of the application domain.

    Some reliability metrics that can be used to quantify the reliability of the software product are as follows:


    1. Mean Time to Failure (MTTF)

    MTTF is described as the time interval between two successive failures. An MTTF of 200 means that one failure can be expected every 200 time units. The time units are entirely system-dependent, and they can even be stated as a number of transactions. MTTF is suitable for systems with long transactions.

    For example, it is suitable for computer-aided design systems, where a designer will work on a design for several hours, as well as for word-processor systems.

    To measure MTTF, we can record the failure data for n failures. Let the failures occur at the time instants t1, t2, ..., tn.

    MTTF can then be calculated as the mean of the n - 1 intervals between successive failures:

                      MTTF = [(t2 - t1) + (t3 - t2) + ... + (tn - tn-1)] / (n - 1)
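    A small Python sketch of this calculation, with illustrative failure times:

```python
def mttf(failure_times):
    """Mean of the inter-failure intervals t2-t1, ..., tn-t(n-1)."""
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

# Failures observed at time units 10, 32, 58, 90 -> intervals 22, 26, 32.
print(mttf([10, 32, 58, 90]))   # about 26.67 time units
```

    Because the intervals telescope, this is simply (tn - t1) / (n - 1).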

    2. Mean Time to Repair (MTTR)

    Once a failure occurs, some time is required to fix the error. MTTR measures the average time it takes to track down the errors causing the failure and to fix them.

    3. Mean Time Between Failures (MTBF)

    We can combine the MTTF and MTTR metrics to get the MTBF metric:

                      MTBF = MTTF + MTTR

    Thus, an MTBF of 300 denotes that once a failure occurs, the next failure is expected to appear only after 300 hours. Here the time measurements are real time, not execution time as in MTTF.

    4. Rate of occurrence of failure (ROCOF)

    ROCOF is the number of failures appearing per unit time interval: the number of unexpected events over a specific period of operation. It is the frequency with which unexpected behaviour is likely to occur. A ROCOF of 0.02 means that two failures are likely to occur in every 100 operational time units. ROCOF is also called the failure intensity metric.

    5. Probability of Failure on Demand (POFOD)

    POFOD is described as the probability that the system will fail when a service is requested: the number of system failures divided by the number of service requests.

    A POFOD of 0.1 means that one out of ten service requests may fail. POFOD is an essential measure for safety-critical systems and is relevant for protection systems where services are demanded only occasionally.

    6. Availability (AVAIL)

    Availability is the probability that the system is available for use at a given time. It takes into account the repair time and the restart time of the system. An availability of 0.995 means that in every 1000 time units, the system is likely to be available for 995 of them. It is the percentage of time that a system is available for use, taking into account planned and unplanned downtime. If a system is down an average of four hours out of every 100 hours of operation, its availability is 96%.
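    A quick check of the two figures above, as a Python sketch:

```python
def availability(uptime, downtime):
    """Probability that the system is available: uptime over total time."""
    return uptime / (uptime + downtime)

print(availability(995, 5))   # 0.995 -> available 995 out of every 1000 time units
print(availability(96, 4))    # 0.96  -> 4 hours of downtime per 100 hours
```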

    Software Metrics for Reliability

    These metrics are used to improve the reliability of the system by identifying weak areas in the requirements, design, code, and testing.

    The different types of software reliability metrics are:


    Requirements Reliability Metrics

    Requirements denote what features the software must include and specify the functionality that must be contained in it. The requirements must be written so that there is no misunderstanding between the developer and the client, and they must have a valid structure to avoid the loss of valuable information.

    The requirements should be thorough and detailed so that the design stage is straightforward, and they should not include inadequate information. Requirements reliability metrics evaluate these quality factors of the requirements document.

    Design and Code Reliability Metrics

    The quality attributes that exist in design and coding are complexity, size, and modularity. Complex modules are tough to understand, and there is a high probability of bugs occurring in them. Reliability decreases if modules combine high complexity with large size, or high complexity with small size. These metrics also apply to object-oriented code, but additional metrics are required there to evaluate quality.

    Testing Reliability Metrics

    These metrics approach reliability in two ways.

    First, they check that the system provides the functions specified in the requirements; this reduces the bugs that are due to missing functionality.

    The second way is evaluating the code, finding the bugs, and fixing them. To ensure that the system includes the specified functionality, test plans are written that include multiple test cases. Each test case is based on one system state and tests some tasks that are based on an associated set of requirements. The goal of an effective verification program is to ensure that every element is tested, the implication being that if the system passes the tests, the requirements' functionality is contained in the delivered system.

  • Software Reliability Measurement Techniques

    Reliability metrics are used to quantitatively express the reliability of the software product. The choice of parameter depends on the type of system to which it applies and the requirements of the application domain.

    Measuring software reliability is a difficult problem because we do not have a good understanding of the nature of software. It is hard to find a suitable way to measure software reliability, or most of the aspects related to it. Even software estimates have no uniform definition. If we cannot measure reliability directly, we can measure something that reflects features related to reliability.

    The current methods of software reliability measurement can be divided into four categories:


    1. Product Metrics

    Product metrics are those derived from the product's artifacts, i.e., requirement specification documents, system design documents, source code, and so on. These metrics help assess whether the product is good enough through records of attributes like usability, reliability, maintainability, and portability. The measurements are taken from the actual body of the source code.

    1. Software size is thought to be reflective of complexity, development effort, and reliability. Lines of Code (LOC), or LOC in thousands (KLOC), is an initial, intuitive approach to measuring software size. The premise of LOC is that program length can be used as a predictor of program characteristics such as effort and ease of maintenance.
    2. The function point metric measures the functionality of proposed software development based on a count of inputs, outputs, master files, inquiries, and interfaces. It is a measure of the functional complexity of the program and is independent of the programming language.
    3. Test coverage metrics estimate fault content and reliability by performing tests on software products, on the assumption that software reliability is a function of the portion of the software that has been successfully verified or tested.
    4. Complexity is directly linked to software reliability, so representing complexity is essential. Complexity-oriented metrics determine the complexity of a program's control structure by reducing the code to a graphical representation. The representative metric is McCabe's Complexity Metric.
    5. Quality metrics measure quality at various steps of software product development. A vital quality metric is Defect Removal Efficiency (DRE). DRE provides a measure of quality resulting from the different quality assurance and control activities applied throughout the development process.
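    As an illustration of item 5, DRE is commonly computed as E / (E + D), where E is the number of defects removed before delivery and D the number reported after release. A sketch with illustrative counts:

```python
def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE = E / (E + D): the share of total defects caught in-house."""
    return found_before_release / (found_before_release + found_after_release)

# 90 defects found during development, 10 reported by users -> DRE = 0.9.
print(defect_removal_efficiency(90, 10))   # 0.9
```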

    2. Project Management Metrics

    Project metrics define project characteristics and execution. Proper management of the project helps us achieve better products: a relationship exists between the development process and the ability to complete projects on time and within the desired quality objectives. Costs increase when developers use inadequate processes. Higher reliability can be achieved by using better development, risk management, and configuration management processes.

    These metrics are:

    • Number of software developers
    • Staffing pattern over the life-cycle of the software
    • Cost and schedule
    • Productivity

    3. Process Metrics

    Process metrics quantify useful attributes of the software development process and its environment. They tell whether the process is functioning optimally by reporting on characteristics like cycle time and rework time. The goal of a process metric is to do the right job the first time through the process. Since the quality of the product is a direct function of the process, process metrics can be used to estimate, monitor, and improve the reliability and quality of software. They describe the effectiveness and quality of the processes that produce the software product.

    Examples are:

    • The effort required in the process
    • Time to produce the product
    • Effectiveness of defect removal during development
    • Number of defects found during testing
    • Maturity of the process

    4. Fault and Failure Metrics

    A fault is a defect in a program that arises when the programmer makes an error, and it causes a failure when the program is executed under particular conditions. These metrics are used to assess the failure-free execution of the software.

    To achieve this objective, the number of faults found during testing and the failures or other problems reported by users after delivery are collected, summarized, and analyzed. Failure metrics are based on customer information regarding faults found after release of the software. The collected failure data is then used to calculate failure density, Mean Time Between Failures (MTBF), or other parameters that measure or predict software reliability.
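    A sketch of two such derived measures in Python. The field data is illustrative, and failure density is taken here as failures per KLOC (one common definition):

```python
def failure_density(failures_reported, kloc):
    """Failures reported in the field per thousand lines of code."""
    return failures_reported / kloc

def mtbf_from_log(failure_times):
    """Mean time between successive failures in a field-failure log."""
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

print(failure_density(24, 120))             # 0.2 failures per KLOC
print(mtbf_from_log([100, 250, 430, 640]))  # 180.0 time units
```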

  • Software Failure Mechanisms

    The software failure can be classified as:

    Transient failure: These failures only occur with specific inputs.

    Permanent failure: This failure appears on all inputs.

    Recoverable failure: System can recover without operator help.

    Unrecoverable failure: The system can be recovered only with operator help.

    Non-corruption failure: Failure does not corrupt system state or data.

    Corrupting failure: It damages the system state or data.

    Software failures may be due to bugs; to ambiguities, oversights, or misinterpretation of the specification that the software is supposed to satisfy; to carelessness or incompetence in writing code; to inadequate testing; to incorrect or unexpected usage of the software; or to other unforeseen problems.

    Hardware vs. Software Reliability

    • Hardware faults are mostly physical faults; software faults are design faults, which are tough to visualize, classify, detect, and correct.
    • Hardware components generally fail due to wear and tear; software components fail due to bugs.
    • In hardware, design faults may also exist, but physical faults usually dominate. In software, we can hardly find a strict counterpart of the hardware manufacturing process, unless the simple act of uploading software modules into place counts; consequently, the quality of the software does not change once it is uploaded into storage and starts running.

    Hardware exhibits the failure characteristics shown in the following figure, known as the bathtub curve. Periods A, B, and C stand for the burn-in phase, the useful-life phase, and the end-of-life phase, respectively.

    [Figure: the hardware bathtub curve]

    Software reliability does not show the same characteristics. A possible curve, obtained by projecting software reliability onto the same axes, is shown in the following figure:

    [Figure: software failure rate over time, with spikes at upgrades]

    There are two significant differences between the hardware and software curves:

    One difference is that, in the last stage, software does not have an increasing failure rate as hardware does. In this phase the software is approaching obsolescence, and there is no motivation for any upgrades or changes, so the failure rate does not change.

    The second difference is that, in the useful-life phase, software experiences a sharp increase in failure rate each time an upgrade is made. The failure rate then levels off gradually, partly because the defects introduced by the upgrade are found and fixed.

    The upgrades in the figure above signify feature upgrades, not reliability upgrades. With a feature upgrade, the complexity of the software is likely to increase, since its functionality is enhanced. Even bug fixes may cause more software failures if the fix injects other defects into the software. A reliability upgrade, by contrast, is likely to produce a drop in the software failure rate when its objective is to enhance reliability, for example by redesigning or reimplementing some modules using better engineering approaches such as the clean-room method.

    A partial list of the distinct features of software compared to hardware is listed below:


    Failure cause: Software defects are primarily design defects.

    Wear-out: Software does not have an energy-related wear-out phase. Bugs can arise without warning.

    Repairable system: Periodic restarts can help clear software problems.

    Time dependency and life cycle: Software reliability is not a function of operational time.

    Environmental factors: These do not affect software reliability, except insofar as they may affect program inputs.

    Reliability prediction: Software reliability cannot be predicted from any physical basis since it depends entirely on human factors in design.

    Redundancy: It cannot improve Software reliability if identical software elements are used.

    Interfaces: Software interfaces are purely conceptual rather than visual.

    Failure rate motivators: Failure rates are generally not predictable from analyses of separate statements.

    Built with standard components: Well-understood and extensively tested standard elements help improve maintainability and reliability. But in the software industry, we have not observed this trend: code reuse has been around for some time, yet only to a minimal extent. There are no standard elements for software, except for some standardized logic structures.

  • Software Reliability in Software Engineering

    Introduction

    Software reliability means operational reliability. It is described as the ability of a system or component to perform its required functions under stated conditions for a specified period of time.

    Software reliability is also defined as the probability that a software system fulfills its assigned task in a given environment for a predefined number of input cases, assuming that the hardware and the input are free of error.

    Software reliability is an essential attribute of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. It is hard to achieve because the complexity of software tends to be high. Any system with a high degree of complexity, including software, is hard to bring to a certain level of reliability; nevertheless, system developers tend to push complexity into the software layer, encouraged by the rapid growth of system size and the ease of upgrading software.

    For example, large next-generation aircraft will have over 1 million source lines of software on-board; next-generation air traffic control systems will contain between one and two million lines; the upcoming International Space Station will have over two million lines on-board and over 10 million lines of ground support software; several significant life-critical defense systems will have over 5 million source lines of software. While the complexity of software is inversely associated with software reliability, it is directly related to other vital factors in software quality, especially functionality, capability, etc.

    Techniques of Software Reliability

    Two distinct models are used to calculate software reliability:

    1. Prediction Modeling
    2. Estimation Modeling

    Prediction Modeling

    As the name suggests, the prediction model is constructed using assumptions about the software to be built, among them data and materials from historical projects and the software's expected operational features. Because prediction during or after development is considered highly unreliable, it is carried out during the design phase or before the development process begins. Forecasts are based not on the current situation but on how the application is expected to be used in the future.

    Estimation Modeling

    The estimation model is constructed using current data from the development or testing processes and is based on several software features. It is applied later in the software development life cycle, once the required software components are in place. The software's reliability is estimated for the current or the immediately following time period. Analysts have developed many such models, including the Basic Execution Time Model, the Shooman Model, the Bug Seeding Model, the Logarithmic Poisson Time Model, the Littlewood-Verrall Model, the Goel-Okumoto Model, the Musa-Okumoto Model, and the Jelinski-Moranda Model.

    Metrics for software Reliability

    The reliability of software applications is measured using software reliability metrics, which can be expressed numerically or in other ways. The metric chosen can depend on the system's behaviour, the software's business objective, the anticipated recovery time, the likelihood of failure, the kinds of users of the program, and so on. The following kinds of assessments are frequently used by software development professionals to gauge software reliability.

    Based on Requirements

    The client's actual needs are captured in the software development specification documentation. It usually outlines the requirements and expectations for developing the software, including its functional features, non-functional aspects, and dependencies on other related systems, and it is used to identify the functionality of the software.

    It is also used to address non-functional aspects such as appearance, compatibility, performance, validation, integration capabilities, and the load passing through the program in real time. The outcome of this process should demonstrate that there are no gaps between the client's needs and the software development team's understanding of them.

    Based on Design and Code

    These metrics assess software reliability during the design and coding phases. The estimation is applied to the usability of software components and to software size. Keeping the system in smaller units is crucial in order to significantly lower the likelihood of faults; once fault occurrences are contained, the reliability scale can serve the analysis as needed. Multiple components with easily comprehensible software units are preferable to a single large, complex system.

    Testing Reliability Metrics

    During the testing process, the reliability metrics are divided into two parts. One is validation, which ensures that the functional behaviour of the built application matches the requirements specified in the documentation. The other evaluates the program's internal functions and performance. The first is referred to as the black-box testing method, and the latter as white-box testing, which is usually performed by the developer.

    The testing procedure is conducted against the previously prepared documentation of the client's requirement specifications; any discrepancy found at this point is reported, fixed as part of a bug fix, and monitored through a defect life cycle. This yields an efficient way of validating the entire system and ensures that every aspect of the developed system is checked.

    The following are the approaches employed, based on the required type of metric analysis, during the above-mentioned software development phases:

    • Mean Time to Failure – (Total time) / (Number of units tested)
    • Mean Time to Repair – (Total time for maintenance) / (total repairs)
    • Mean Time Between Failure – MTTF + MTTR
    • Rate of Occurrence of Failure – 1 / (MTTF)
    • Probability of Failure – (Number of Failures) / (Total cases considered)
    • Availability – MTTF / MTBF
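    These formulas can be exercised with a few lines of Python, using illustrative figures:

```python
mttf = 100 / 40          # Mean Time to Failure: total time / units tested
mttr = 60 / 12           # Mean Time to Repair: maintenance time / repairs
mtbf = mttf + mttr       # Mean Time Between Failures
rocof = 1 / mttf         # Rate of Occurrence of Failure
pofod = 3 / 100          # Probability of Failure: failures / cases considered
avail = mttf / mtbf      # Availability

print(mttf, mtbf, round(avail, 3))   # 2.5 7.5 0.333
```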

    Example to Implement Software Reliability:

    Let us consider the Mean Time to Failure computation, which requires both the total time and the number of units tested.

    For example, if 40 units are tested for a total of 100 time units, the MTTF is calculated as:

                      MTTF = (total time) / (number of units tested) = 100 / 40 = 2.5

    Factors Affecting Software Reliability

    A user's assessment of a software program's reliability is based on two types of information:

    • The quantity of errors in the software
    • The way users interact with the system. This is referred to as the operational profile.

    The following factors affect the number of faults in a system:

    • The code’s size and complexity.
    • Features of the employed development process.
    • Training, education, and experience of the development staff.
    • Operating environment.

    Software Reliability Applications

    There are several uses for software reliability:

    1. Comparing software engineering technologies:
      • How much does it cost to adopt a technology?
      • In terms of cost and quality, what is the technology's yield?
    2. Monitoring the system testing process: the failure intensity metric provides information about the system's current quality; a high intensity indicates that additional testing is necessary.
    3. Controlling the system in use: the degree of software modification required for maintenance has an impact on the system's reliability.
    4. Improving understanding of software development processes: by quantifying quality, we gain a better understanding of the development process.

    Software Reliability Benefits

    Including software reliability in the software development process has the following benefits:

    • Software reliability techniques help preserve data.
    • They help avoid software failures.
    • Upgrading the system becomes simpler.
    • Improved system performance and efficiency lead to higher productivity.
  • Structured Programming

    In structured programming, we subdivide the whole program into small modules so that the program becomes easy to understand. The purpose of structured programming is to linearize control flow through a computer program so that the execution sequence follows the sequence in which the code is written. The dynamic structure of the program then resembles its static structure. This enhances the readability, testability, and modifiability of the program. The linear flow of control is achieved by restricting the set of allowed control constructs to single-entry, single-exit formats.

    Why do we use Structured Programming?

    We use structured programming because it allows the programmer to understand the program easily. If a program consists of thousands of instructions and an error occurs, it is complicated to find that error in the whole program; with structured programming, we can easily detect the error, go to that location, and correct it. This saves a lot of time.

    The following are the rules of structured programming:

    Structured Rule One: Code Block

    Rule 1 of Structured Programming: A code block is structured, as shown in the figure. In flowcharting terms, a box with a single entry point and a single exit point is structured. Structured programming is a method of making it evident that the program is correct.

    If the entry conditions are correct but the exit conditions are wrong, the error must be in the block. This is not true if execution is allowed to jump into the middle of a block: then the error might be anywhere in the program, and debugging under these circumstances is much harder.


    Structured Rule Two: Sequence

    A sequence of blocks is correct if the exit conditions of each block match the entry conditions of the following block. Execution enters each block at the block’s entry point and leaves through the block’s exit point. The whole series can be regarded as a single block, with an entry point and an exit point.

    Rule 2 of Structured Programming: Two or more code blocks in the sequence are structured, as shown in the figure.


    Structured Rule Three: Alternation

    If-then-else is frequently called alternation (because there are alternative options). In structured programming, each choice is a code block. If alternation is organized as in the flowchart, then there is one entry point (at the top) and one exit point (at the bottom). The structure should be coded so that if the entry conditions are fulfilled, then the exit conditions are satisfied (just like a code block).

    Rule 3 of Structured Programming: The alternation of two code blocks is structured, as shown in the figure.

    An example of an entry condition for an alternation structure is: register $8 contains a signed integer. The exit condition may be: register $8 contains the absolute value of that signed integer. The branch structure is used to fulfil the exit condition.


    Structured Rule Four: Iteration

    Iteration (the while-loop) is organized as shown in the figure. It also has one entry point and one exit point. The entry point has conditions that must be satisfied, and the exit point has conditions that will be fulfilled. There are no jumps into the structure from external points of the code.

    Rule 4 of Structured Programming: The iteration of a code block is structured, as shown in the figure.


    Structured Rule Five: Nested Structures

    In flowcharting terms, any code block can be expanded into any of the structures. Conversely, if there is a portion of the flowchart that has a single entry point and a single exit point, it can be summarized as a single code block.

    Rule 5 of Structured Programming: A structure (of any size) that has a single entry point and a single exit point is equivalent to a code block. For example, suppose we are designing a program to go through a list of signed integers, calculating the absolute value of each one. We may (1) first regard the program as one block, then (2) sketch in the iteration required, and finally (3) fill in the details of the loop body, as shown in the figure.


The other control structures, such as case, do-until, do-while, and for, are not strictly needed. However, they are sometimes convenient and are usually regarded as part of structured programming. In assembly language, they add little convenience.

  • Programming Style

Programming style refers to the technique used in writing the source code for a computer program. Most programming styles are designed to help programmers quickly read and understand the program as well as avoid making errors. (Older programming styles also focused on conserving screen space.) A good coding style can overcome the many deficiencies of a first programming language, while poor style can defeat the intent of an excellent language.

The goal of good programming style is to provide understandable, straightforward, elegant code. The programming style used in a particular program may be derived from the coding standards or code conventions of a company or other computing organization, as well as the preferences of the actual programmer.

Some general rules or guidelines for programming style are:


1. Clarity and simplicity of Expression: The program should be designed in such a manner that the objective of the program is clear.

2. Naming: In a program, you are required to name modules, processes, variables, and so on. Care should be taken that names are not cryptic or non-representative.

          For Example:
              Cryptic:         a = 3.14 * r * r;
              Representative:  area_of_circle = 3.14 * radius * radius;

3. Control Constructs: It is desirable that, as far as possible, single-entry, single-exit constructs be used.

4. Information hiding: The information contained in the data structures should be hidden from the rest of the system where possible. Information hiding can decrease the coupling between modules and make the system more maintainable.

    5. Nesting: Deep nesting of loops and conditions greatly harm the static and dynamic behavior of a program. It also becomes difficult to understand the program logic, so it is desirable to avoid deep nesting.

    6. User-defined types: Make heavy use of user-defined data types like enum, class, structure, and union. These data types make your program code easy to write and easy to understand.

    7. Module size: The module size should be uniform. The size of the module should not be too big or too small. If the module size is too large, it is not generally functionally cohesive. If the module size is too small, it leads to unnecessary overheads.

    8. Module Interface: A module with a complex interface should be carefully examined.

9. Side-effects: When a module is invoked, it sometimes has a side effect of modifying the program state. Such side effects should be avoided wherever possible.

  • Coding in Software Engineering

Coding is the process of transforming the design of a system into a computer language format. This phase of software development is concerned with translating the design specification into source code. It is necessary to write source code and internal documentation so that conformance of the code to its specification can be easily verified.

Coding is done by coders or programmers, who may be people other than the designers. The goal is not just to reduce the effort and cost of the coding phase, but to cut the cost of later stages. The cost of testing and maintenance can be significantly reduced with efficient coding.

    Goals of Coding

1. To translate the design of the system into a computer language format: Coding is the process of transforming the design of a system into a computer language format which can be executed by a computer and which performs the tasks specified during the design phase.
    2. To reduce the cost of later phases: The cost of testing and maintenance can be significantly reduced with efficient coding.
3. Making the program more readable: The program should be easy to read and understand. Having readability and understandability as a clear objective of the coding activity can itself help in producing more maintainable software.

For implementing our design into code, we require a high-level programming language. A programming language should have the following characteristics:

    Characteristics of Programming Language

    Following are the characteristics of Programming Language:


Readability: A good high-level language allows programs to be written in a way that resembles an almost-English description of the underlying algorithm. The coding may then be done in an essentially self-documenting way.

Portability: High-level languages, being virtually machine-independent, make it easy to develop portable software.

Generality: Most high-level languages allow the writing of a vast collection of programs, thus relieving the programmer of the need to become an expert in many diverse languages.

Brevity: The language should have the ability to implement an algorithm with a small amount of code. Programs written in high-level languages are often significantly shorter than their low-level equivalents.

Error checking: A programmer is likely to make many errors in the development of a computer program. Many high-level languages perform extensive error checking both at compile-time and run-time.

Cost: The ultimate cost of a programming language is a function of many of its characteristics.

    Quick translation: It should permit quick translation.

Efficiency: It should permit the generation of efficient object code.

    Modularity: It is desirable that programs can be developed in the language as several separately compiled modules, with the appropriate structure for ensuring self-consistency among these modules.

    Widely available: Language should be widely available, and it should be feasible to provide translators for all the major machines and all the primary operating systems.

    A coding standard lists several rules to be followed during coding, such as the way variables are to be named, the way the code is to be laid out, error return conventions, etc.

    Coding Standards

General coding standards refer to how the developer writes code, so here we will discuss some essential standards regardless of the programming language being used.

    The following are some representative coding standards:

    1. Indentation: Proper and consistent indentation is essential in producing easy to read and maintainable programs.
      Indentation should be used to:
      • Emphasize the body of a control structure such as a loop or a select statement.
      • Emphasize the body of a conditional statement
      • Emphasize a new scope block
2. Inline comments: Inline comments explaining the functioning of a subroutine, or key aspects of the algorithm, shall be used frequently.
3. Rules for limiting the use of globals: These rules define what types of data can be declared global and what cannot.
4. Structured Programming: Structured (or Modular) Programming methods shall be used. “GOTO” statements shall not be used as they lead to “spaghetti” code, which is hard to read and maintain, except as outlined in the FORTRAN Standards and Guidelines.
    5. Naming conventions for global variables, local variables, and constant identifiers: A possible naming convention can be that global variable names always begin with a capital letter, local variable names are made of small letters, and constant names are always capital letters.
6. Error return conventions and exception handling system: The way different functions in a program report error conditions should be standardized within an organization. For example, on encountering an error condition, functions should consistently return either a 0 or a 1.

    Coding Guidelines

General coding guidelines provide the programmer with a set of best practices which can be used to make programs easier to read and maintain. Most of the examples use the C language syntax, but the guidelines can be applied to all languages.

    The following are some representative coding guidelines recommended by many software development organizations.


    1. Line Length: It is considered a good practice to keep the length of source code lines at or below 80 characters. Lines longer than this may not be visible properly on some terminals and tools. Some printers will truncate lines longer than 80 columns.

    2. Spacing: The appropriate use of spaces within a line of code can improve readability.

    Example:

    Bad:        cost=price+(price*sales_tax);
                fprintf(stdout,"The total cost is %5.2f\n",cost);

    Better:     cost = price + (price * sales_tax);
                fprintf(stdout, "The total cost is %5.2f\n", cost);

3. The code should be well-documented: As a rule of thumb, there should be at least one comment line, on average, for every three source lines.

4. The length of any function should not exceed 10 source lines: A very lengthy function is generally difficult to understand, as it probably carries out many different functions. For the same reason, lengthy functions are likely to have a disproportionately larger number of bugs.

    5. Do not use goto statements: Use of goto statements makes a program unstructured and very tough to understand.

    6. Inline Comments: Inline comments promote readability.

    7. Error Messages: Error handling is an essential aspect of computer programming. This does not only include adding the necessary logic to test for and handle errors but also involves making error messages meaningful.

  • What is User Interface (UI) Design?

The user interface is the visual part of a computer application or operating system through which a user interacts with the computer or the software. It determines how commands are given to the computer or the program and how data is displayed on the screen.

    Types of User Interface

    There are two main types of User Interface:

    • Text-Based User Interface or Command Line Interface
    • Graphical User Interface (GUI)

    Text-Based User Interface: This method relies primarily on the keyboard. A typical example of this is UNIX.

    Advantages

• Many options that are easier to customize.
• Typically capable of more powerful tasks.

    Disadvantages

    • Relies heavily on recall rather than recognition.
    • Navigation is often more difficult.

Graphical User Interface (GUI): A GUI relies much more heavily on the mouse. A typical example of this type of interface is any version of the Windows operating system.

    GUI Characteristics

Characteristic: Description

Windows: Multiple windows allow different information to be displayed simultaneously on the user’s screen.
Icons: Icons represent different types of information. On some systems, icons represent files; on others, they represent processes.
Menus: Commands are selected from a menu rather than typed in a command language.
Pointing: A pointing device such as a mouse is used for selecting choices from a menu or indicating items of interest in a window.
Graphics: Graphics elements can be mixed with text on the same display.

    Advantages

    • Less expert knowledge is required to use it.
• Easier to navigate; users can look through folders quickly in a guess-and-check manner.
    • The user may switch quickly from one task to another and can interact with several different applications.

    Disadvantages

• Typically fewer options.
• Usually less customizable. It is not easy to use one button for many different variations.

    UI Design Principles


Structure: The design should organize the user interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with overall user interface architecture.

Simplicity: The design should make simple, common tasks easy, communicating clearly and directly in the user’s language, and providing good shortcuts that are meaningfully related to longer procedures.

    Visibility: The design should make all required options and materials for a given function visible without distracting the user with extraneous or redundant data.

Feedback: The design should keep users informed of actions or interpretations, changes of state or condition, and errors or exceptions that are relevant and of interest to the user, through clear, concise, and unambiguous language familiar to users.

    Tolerance: The design should be flexible and tolerant, decreasing the cost of errors and misuse by allowing undoing and redoing while also preventing bugs wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions.

  • Object-Oriented Design in Software Engineering

In the object-oriented design method, the system is viewed as a collection of objects (i.e., entities). The state is distributed among the objects, and each object handles its own state data. For example, in a Library Automation Software, each library member may be a separate object with its own data and functions to operate on that data. The functions defined for one object cannot refer to or change the data of other objects. Objects have their internal data which represents their state. Similar objects form a class. In other words, each object is a member of some class. Classes may inherit features from a superclass.

    The different terms related to object design are:

    1. Objects: All entities involved in the solution design are known as objects. For example, person, banks, company, and users are considered as objects. Every entity has some attributes associated with it and has some methods to perform on the attributes.
    2. Classes: A class is a generalized description of an object. An object is an instance of a class. A class defines all the attributes, which an object can have and methods, which represents the functionality of the object.
3. Messages: Objects communicate by message passing. Messages consist of the identity of the target object, the name of the requested operation, and any other information needed to perform the function. Messages are often implemented as procedure or function calls.
4. Abstraction: In object-oriented design, complexity is handled using abstraction. Abstraction is the removal of the irrelevant and the amplification of the essential.
    5. Encapsulation: Encapsulation is also called an information hiding concept. The data and operations are linked to a single unit. Encapsulation not only bundles essential information of an object together but also restricts access to the data and methods from the outside world.
6. Inheritance: OOD allows similar classes to stack up in a hierarchical manner where the lower or sub-classes can import, implement, and re-use allowed variables and functions from their immediate superclasses. This property of OOD is called inheritance. This makes it easier to define a specific class and to create generalized classes from specific ones.
7. Polymorphism: OOD languages provide a mechanism where methods performing similar tasks but varying in arguments can be assigned the same name. This is known as polymorphism, which allows a single interface to perform functions for different types. Depending upon how the service is invoked, the respective portion of the code gets executed.