Category: Uncategorized

  • What is Project?

A project is a group of tasks that need to be completed to reach a clear result. A project can also be defined as a set of inputs and outputs required to achieve a goal. Projects can range from simple to complex and can be carried out by one person or by a hundred.

    Projects are usually described and approved by a project manager or a team executive. They outline the expectations and objectives, and it is up to the team to handle the logistics and complete the project on time. For good project development, some teams split the project into specific tasks so they can manage responsibility and make use of team strengths.

    What is software project management?

    Software project management is the art and discipline of planning and supervising software projects. It is a sub-discipline of project management in which software projects are planned, implemented, monitored and controlled.

    It is a procedure of managing, allocating and timing resources to develop computer software that fulfills requirements.

    In software project management, the client and the developers need to know the length, duration and cost of the project.

    Prerequisites of software project management

    There are three needs for software project management. These are:

    1. Time
    2. Cost
    3. Quality

    It is an essential part of a software organization to deliver a quality product, keep the cost within the client's budget, and deliver the project as per schedule. There are various factors, both external and internal, which may impact this triple constraint. Any one of these three factors can severely affect the other two.

    Project Manager

    A project manager is a person who has the overall responsibility for the planning, design, execution, monitoring, controlling and closure of a project. A project manager plays an essential role in the success of a project.

    A project manager is a person who is responsible for making decisions on both large and small projects. The project manager manages risk and minimizes uncertainty. Every decision the project manager makes should directly benefit the project.

    Role of a Project Manager:

    1. Leader

    A project manager must lead his team and provide them direction so that they understand what is expected of them.

    2. Medium:

    The project manager is a medium between the clients and the team. He must coordinate and transfer all the relevant information from the clients to the team and report to senior management.

    3. Mentor:

    He should be there to guide his team at each step and make sure that the team stays cohesive. He provides recommendations to his team and points them in the right direction.

    Responsibilities of a Project Manager:

    1. Managing risks and issues.
    2. Creating the project team and assigning tasks to team members.
    3. Activity planning and sequencing.
    4. Monitoring and reporting progress.
    5. Modifying the project plan to deal with changing situations.
  • Trust Region Methods

    In reinforcement learning, especially in policy optimization techniques, the main goal is to modify the agent's policy to improve performance without destabilizing its behavior. This is important when working with deep neural networks: if updates are large or not properly limited, training can become unstable. Trust regions help maintain stability by guaranteeing that parameter updates are smooth and effective during training.

    What is Trust Region?

    A trust region is a concept used in optimization that restricts updates to the policy or value function in training, maintaining stability and reliability in the learning process. Trust regions assist in limiting the extent to which the model’s parameters, like policy networks, are allowed to vary during updates. This will help in avoiding large or unpredictable changes that may disrupt the learning process.

    Role of Trust Regions in Policy Optimization

    The idea of trust regions is used to regulate the extent to which the policy can be altered during updates. This guarantees that every update improves the policy without implementing drastic changes that could cause instability or affect performance. Some of the aspects where trust regions play an important role are −

    • Policy Gradient − Trust regions are often used in these methods to modify the policy to optimize expected rewards. In the absence of a trust region, large updates can result in unpredictable behavior, particularly when employing function approximators such as deep neural networks.
    • KL Divergence − In Trust Region Policy Optimization (TRPO), KL divergence serves as the criterion for evaluating the extent of policy changes by measuring the divergence between the old and new policies (a small sketch of this computation follows this list). The main idea is that minor policy changes tend to improve the agent's performance consistently, whereas major changes may lead to instability.
    • Surrogate Objective in PPO − Proximal Policy Optimization (PPO) approximates the trust region through a surrogate objective function that incorporates a clipping mechanism. The primary goal is to prevent major changes in the policy by penalizing big deviations from the previous policy, which in turn improves the stability of the policy.
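
    To make the KL-divergence check concrete, here is a minimal TypeScript sketch that computes KL(old ‖ new) for the discrete action probabilities of a single state. The function name and the example numbers are illustrative assumptions, not part of TRPO itself.

    // KL(oldPolicy || newPolicy) for discrete action probabilities in one state.
    // A small positive value means the new policy stays close to the old one.
    function klDivergence(oldProbs: number[], newProbs: number[]): number {
      let kl = 0;
      for (let a = 0; a < oldProbs.length; a++) {
        if (oldProbs[a] > 0) {
          kl += oldProbs[a] * Math.log(oldProbs[a] / newProbs[a]);
        }
      }
      return kl;
    }

    // Example: a small policy change yields a small divergence.
    const oldPolicy = [0.5, 0.3, 0.2];
    const newPolicy = [0.45, 0.35, 0.2];
    console.log(klDivergence(oldPolicy, newPolicy)); // ~0.006, well inside a typical trust region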

    Trust Region Methods for Deep Reinforcement Learning

    Following is a list of algorithms that use trust regions in deep reinforcement learning to ensure that updates are effective and reliable, improving the overall performance −

    1. Trust Region Policy Optimization

    Trust Region Policy Optimization (TRPO) is a reinforcement learning algorithm that aims to enhance policies in a more efficient and steady way. It deals with the issue of large, unstable updates that usually occur in policy gradient methods by introducing trust region constraint.

    The constraint used in TRPO is the Kullback-Leibler (KL) divergence, which restricts the variation between the old and new policies by measuring their disparity. This helps TRPO maintain the stability of the learning process and improves the efficiency of the policy.

    The TRPO algorithm works by repeatedly modifying the policy parameters to improve a surrogate objective function within the boundaries of the trust region constraint. This requires solving the trade-off between enhancing the policy and maintaining stability.

    2. Proximal Policy Optimization

    Proximal Policy Optimization (PPO) is a reinforcement learning algorithm whose aim is to enhance the consistency and dependability of policy updates. It uses a surrogate objective function along with a clipping mechanism to avoid extreme adjustments to the policy. This approach ensures that the new policy does not differ too much from the old one, while maintaining a balance between exploration and exploitation.

    PPO is one of the simplest and most effective trust region techniques. It is widely used in applications such as robotics and autonomous cars because of its reliability and simplicity. The algorithm involves collecting a set of experiences, calculating advantage estimates, and carrying out several rounds of stochastic gradient descent to update the policy.
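
    To make the clipping idea concrete, here is a minimal TypeScript sketch of the per-sample clipped surrogate objective used in PPO-style updates. The function and variable names are illustrative, and this is a sketch of the objective only, not a full training loop.

    // Clipped surrogate objective for a single (state, action) sample.
    // ratio = pi_new(a|s) / pi_old(a|s); advantage estimates how good the action was.
    function clippedSurrogate(ratio: number, advantage: number, epsilon = 0.2): number {
      const unclipped = ratio * advantage;
      const clippedRatio = Math.min(Math.max(ratio, 1 - epsilon), 1 + epsilon);
      const clipped = clippedRatio * advantage;
      // Taking the minimum removes any incentive to push the ratio outside [1-eps, 1+eps].
      return Math.min(unclipped, clipped);
    }

    // A large ratio gives no extra benefit once it leaves the clipping range.
    console.log(clippedSurrogate(1.5, 2.0)); // 2.4 (clipped at ratio 1.2)
    console.log(clippedSurrogate(1.1, 2.0)); // 2.2 (inside the range, left unclipped)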

    3. Natural Gradient Descent

    This technique modifies the step size according to the curvature of the objective function to form a trust region surrounding the current policy. It is particularly effective in high-dimensional environments.

    Challenges in Trust Regions

    There are certain challenges while implementing trust region techniques in deep reinforcement learning −

    • Most trust region techniques like TRPO and PPO require approximations, which can violate constraints or fail to find the optimal solution within the trust region.
    • The techniques can be computationally intensive, especially with high-dimensional spaces.
    • These techniques often require a wide range of samples for effective learning.
    • The efficiency of trust region techniques highly depends on the choice of hyperparameters. Tuning these parameters is quite challenging and often requires expertise.
  • Deep Deterministic Policy Gradient (DDPG)

    Deep Deterministic Policy Gradient (DDPG) is an algorithm that simultaneously learns a Q-function and a policy. It learns the Q-function using off-policy data and the Bellman equation, and then uses the Q-function to learn the policy.

    What is Deep Deterministic Policy Gradient?

    Deep Deterministic Policy Gradient (DDPG) is a reinforcement learning algorithm created to address problems with continuous action spaces. Based on the actor-critic architecture, it is an off-policy, model-free algorithm that combines Q-learning and policy gradient methods and uses deep learning to estimate value functions and policies, making it suitable for tasks involving continuous actions such as robotic control and autonomous driving.

    In simple terms, it extends Deep Q-Networks (DQN) to continuous action spaces by using a deterministic policy instead of the stochastic policies usual in DQN or REINFORCE.

    Key Concepts in DDPG

    The key concepts involved in Deep Deterministic Policy Gradient (DDPG) are −

    • Policy Gradient Theorem − The deterministic policy gradient theorem is employed by DDPG, which allows the calculation of the gradient of the expected return in relation to the policy parameters. Additionally, this gradient is used for updating the actor network.
    • Off-Policy − DDPG is an off-policy algorithm, indicating it learns from experiences created by a policy that is not the one being optimized. This is done by storing previous experiences in the replay buffer and using them for learning.

    What is Deterministic in DDPG?

    A deterministic policy maps states to actions: when you provide a state, it returns a single action to perform. In contrast, a stochastic policy returns a probability distribution over actions for every state. Deterministic policies are used in deterministic environments where the actions taken fully determine the outcome.
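
    The contrast can be written down directly as TypeScript types; the type and function names below are purely illustrative.

    // A deterministic policy maps a state directly to one action vector.
    type DeterministicPolicy = (state: number[]) => number[];

    // A stochastic policy maps a state to a probability for each discrete action.
    type StochasticPolicy = (state: number[]) => number[]; // probabilities summing to 1

    // Deterministic: the same state always produces the same steering command.
    const steerStraight: DeterministicPolicy = (_state) => [0.0];

    // Stochastic: 80% chance of action 0 and 20% chance of action 1 in every state.
    const coinFlipPolicy: StochasticPolicy = (_state) => [0.8, 0.2];

    console.log(steerStraight([1.0, 0.5]));  // [0] - one concrete action
    console.log(coinFlipPolicy([1.0, 0.5])); // [0.8, 0.2] - a distribution over actions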

    Core Components in DDPG

    Following are the core components used in Deep Deterministic Policy Gradient (DDPG) −

    • Actor-Critic Architecture − The actor is the policy network: it takes the state as input and outputs a deterministic action. The critic is the Q-function approximator that estimates the action-value function Q(s,a); it takes both the state and the action as input and predicts the expected return.
    • Deterministic Policy − DDPG uses a deterministic policy instead of the stochastic policies mostly used by algorithms like REINFORCE or other policy gradient methods. The actor produces one action for a given state rather than a distribution over actions.
    • Experience Replay − DDPG uses an experience replay buffer to store previous experiences as tuples consisting of state, action, reward, and next state. The buffer is used to sample mini-batches in order to break the temporal dependencies among successive experiences, which improves training stability.
    • Target Networks − To ensure stability in learning, DDPG employs target networks for both the actor and the critic. These are slowly updated copies of the original networks, which decreases the variability of updates during training.
    • Exploration Noise − Since DDPG uses a deterministic policy, the policy is inherently greedy and would not explore the environment sufficiently on its own, so exploration noise is added to the actions during training.

    How does DDPG Work?

    Deep Deterministic Policy Gradient (DDPG) is a reinforcement learning algorithm used particularly for continuous action spaces. It is an actor-critic method, i.e., it uses two models: the actor, which decides the action to take in the current state, and the critic, which assesses the effectiveness of the action taken. The working of DDPG is described below −

    Continuous Action Spaces

    DDPG is effective in environments with continuous action spaces, such as controlling the speed and steering of a car, in contrast to the discrete action spaces found in many games.

    Experience Replay

    DDPG uses experience replay by storing the agent's experiences in a buffer and sampling random batches of experiences for updating the networks. Each experience is stored as a tuple (sₜ, aₜ, rₜ, sₜ₊₁), where −

    • sₜ represents the state at time t.
    • aₜ represents the action taken.
    • rₜ represents the reward received.
    • sₜ₊₁ represents the new state after the action.

    Randomly selecting experiences from the replay buffer reduces the correlation between consecutive events, leading to more stable training.
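
    Below is a minimal TypeScript sketch of such a replay buffer; the class and method names are illustrative, not taken from any DDPG library, and the sketch assumes the buffer already holds at least one experience before sampling.

    interface Experience {
      state: number[];
      action: number[];
      reward: number;
      nextState: number[];
    }

    class ReplayBuffer {
      private buffer: Experience[] = [];

      constructor(private capacity: number) {}

      // Store a new experience, discarding the oldest one when the buffer is full.
      add(exp: Experience): void {
        if (this.buffer.length >= this.capacity) {
          this.buffer.shift();
        }
        this.buffer.push(exp);
      }

      // Draw a random mini-batch; uniform sampling breaks temporal correlations.
      sample(batchSize: number): Experience[] {
        const batch: Experience[] = [];
        for (let i = 0; i < batchSize; i++) {
          const idx = Math.floor(Math.random() * this.buffer.length);
          batch.push(this.buffer[idx]);
        }
        return batch;
      }

      get size(): number {
        return this.buffer.length;
      }
    }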

    Actor-Critic Training

    • Critic Update − The critic update is based on Temporal Difference (TD) learning, particularly the TD(0) variant. The main task of the critic is to assess the actor's decisions by calculating the Q-value, which predicts the future rewards for specific state-action combinations. The critic update in DDPG consists of reducing the TD error (the difference between the predicted Q-value and the target Q-value).
    • Actor Update − The actor update involves modifying the actor's neural network to improve the policy, i.e. the decision-making process. When updating the actor, the gradient of the Q-value with respect to the action is calculated, and the actor's network is adjusted using gradient ascent to increase the likelihood of choosing actions that result in higher Q-values, ultimately improving the policy.

    Target Networks and Soft Updates

    Instead of directly copying learned networks to target networks, DDPG employs a soft update approach, which updates target networks with a portion of the learned networks.

    θ′ ← τθ + (1−τ)θ′, where τ is a small value that ensures slow updates and improves stability.
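
    A minimal TypeScript sketch of this soft update, treating each network simply as an array of parameters; the function name, the representation, and the τ value are illustrative assumptions.

    // theta_target <- tau * theta + (1 - tau) * theta_target, applied element-wise.
    function softUpdate(learned: number[], target: number[], tau = 0.005): number[] {
      return target.map((t, i) => tau * learned[i] + (1 - tau) * t);
    }

    // The target parameters drift slowly toward the learned parameters.
    console.log(softUpdate([1.0, 2.0], [0.0, 0.0], 0.1)); // [0.1, 0.2]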

    Exploration-exploitation

    DDPG adds Ornstein-Uhlenbeck noise to the actions to promote exploration, as deterministic policies could become trapped in suboptimal solutions in continuous action spaces. The noise motivates the agent to explore the environment.
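
    A minimal TypeScript sketch of an Ornstein-Uhlenbeck noise process as it is commonly used for DDPG exploration; the parameter values, names, and the Box-Muller Gaussian approximation are illustrative assumptions.

    // Ornstein-Uhlenbeck process: temporally correlated noise that drifts back toward mu.
    class OUNoise {
      private state: number;

      constructor(
        private mu = 0,       // long-run mean
        private theta = 0.15, // speed of mean reversion
        private sigma = 0.2   // scale of the random perturbation
      ) {
        this.state = mu;
      }

      sample(): number {
        // dx = theta * (mu - x) + sigma * N(0, 1), with N(0, 1) from a Box-Muller draw.
        const gaussian =
          Math.sqrt(-2 * Math.log(1 - Math.random())) * Math.cos(2 * Math.PI * Math.random());
        this.state += this.theta * (this.mu - this.state) + this.sigma * gaussian;
        return this.state;
      }
    }

    // Noise added to the deterministic action encourages exploration during training.
    const noise = new OUNoise();
    const actorOutput = 0.4; // hypothetical action from the actor network
    console.log(actorOutput + noise.sample());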

    Challenges in DDPG

    The two main challenges in DDPG that have to be addressed are −

    • Instability − DDPG may experience stability issues in training, especially when employed with function approximators such as neural networks. This is addressed using target networks and experience replay; however, it still needs precise tuning of hyperparameters.
    • Exploration − Even with the use of Ornstein-Uhlenbeck noise for exploration, DDPG can face difficulties in extremely complicated environments if the exploration strategy is not effective.
  • Deep Q-Networks (DQN)

    What are Deep Q-Networks?

    Deep Q-Network (DQN) is an algorithm in the field of reinforcement learning. It is a combination of deep neural networks and Q-learning, enabling agents to learn optimal policies in complex environments. While traditional Q-learning works effectively for environments with a small and finite number of states, it struggles with large or continuous state spaces due to the size of the Q-table. Deep Q-Networks overcome this limitation by replacing the Q-table with a neural network that can approximate the Q-values for every state-action pair.

    Key Components of Deep Q-Networks

    Following is a list of components that are a part of the architecture of Deep Q-Networks −

    • Input Layer − This layer receives state information from the environment in the form of a vector of numerical values.
    • Hidden Layers − The DQN’s hidden layers consist of multiple fully connected neurons that transform the input data into more complex features that are more suitable for predictions.
    • Output Layer − Each possible action in the current state is represented by a single neuron in the DQN’s output layer. The output values of these neurons represent the estimated value of each action within that state.
    • Memory − DQN utilizes a memory replay to store the training events of the agent. All the information including the current state, action taken, the reward received, and the next state are stored as tuples in the memory.
    • Loss Function − The DQN computes the difference between the actual Q-values from replay memory and the predicted Q-values to determine the loss.
    • Optimization − It involves adjusting the network’s weights in order to minimize the loss function. Usually, stochastic gradient descent (SGD) is employed for this purpose.

    The following image depicts the components of the deep Q-network architecture −

    Deep Q-Network Architecture

    How Deep Q-Networks Work?

    The working of DQN involves the following steps −

    Neural Network Architecture

    The DQN takes a sequence of frames (such as images from a game) as input and generates a set of Q-values for every potential action at that particular state. The typical configuration includes convolutional layers to capture spatial relationships and fully connected layers to output the Q-values.

    Experience Replay

    During training, the agent stores its interactions (state, action, reward, next state) in a replay buffer. Sampling random batches from this buffer to train the network reduces the correlation between consecutive experiences and improves training stability.

    Target Network

    In order to stabilize the training process, Deep Q-Networks employ a distinct target network for producing Q-value targets. The target network receives regular updates of weights from the main network to minimize the risk of divergence while training.

    Epsilon-Greedy Policy

    The agent uses an epsilon-greedy strategy, where it selects a random action with probability ϵ and the action with the highest Q-value with probability 1−ϵ. This balance between exploration and exploitation helps the agent learn effectively.
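
    A minimal TypeScript sketch of epsilon-greedy action selection over a vector of Q-values; the function name and the example numbers are illustrative.

    // With probability epsilon pick a random action index, otherwise pick the greedy one.
    function epsilonGreedy(qValues: number[], epsilon: number): number {
      if (Math.random() < epsilon) {
        return Math.floor(Math.random() * qValues.length); // explore
      }
      return qValues.indexOf(Math.max(...qValues)); // exploit: index of the largest Q-value
    }

    // Mostly exploits (action 2 here), but explores 10% of the time.
    console.log(epsilonGreedy([0.1, 0.5, 0.9], 0.1));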

    Training Process

    The neural network is trained using gradient descent to minimize the loss between the predicted Q-values and the target Q-values. The target Q-values are calculated using the Bellman equation, which incorporates the reward received and the maximum Q-value of the next state.
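
    A minimal sketch of this target computation in TypeScript, assuming a discrete action space and a done flag for terminal states; the function name, gamma value, and numbers are illustrative.

    // Bellman target: y = r + gamma * max_a' Q_target(s', a'), or just r for terminal states.
    function qTarget(
      reward: number,
      nextQValues: number[], // target-network Q-values for the next state
      done: boolean,
      gamma = 0.99
    ): number {
      if (done) {
        return reward;
      }
      return reward + gamma * Math.max(...nextQValues);
    }

    // The loss is then (predictedQ - target)^2, minimized with gradient descent.
    console.log(qTarget(1.0, [0.5, 2.0, 1.5], false)); // 1 + 0.99 * 2 = 2.98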

    Limitations of Deep Q-Networks

    Deep Q-Networks (DQNs) have several limitations that impact their efficiency and performance −

    • DQNs suffer from instability due to the non-stationarity caused by frequent neural network updates.
    • DQNs at times overestimate Q-values, which can have a negative impact on the learning process.
    • DQNs require many samples to learn well, which can be expensive and time-consuming in terms of computation.
    • DQN performance is greatly influenced by the selection of hyperparameters, such as the learning rate, discount factor, and exploration rate.
    • DQNs are mainly intended for discrete action spaces and might face difficulties in environments with continuous action spaces.

    Double Deep Q-Networks

    Double DQN is an extended version of the Deep Q-Network created to address an issue in the basic DQN method − overestimation bias in Q-value updates. The overestimation bias is caused by the fact that the Q-learning update rule uses the same Q-network for both choosing and assessing actions, resulting in inflated estimates of the Q-values. This problem can cause instability in training and hinder the learning process. Double DQN uses two different networks to solve this issue −

    • Q-Network, responsible for choosing the action.
    • Target Network, which assesses the value of the chosen action.

    The major modification in Double DQN lies in how the target is calculated. Rather than using only the Q-network for choosing and assessing the next action, Double DQN uses the Q-network to select the action in the subsequent state and the target network to evaluate the Q-value of the chosen action. This separation decreases the tendency to overestimate and results in more precise value estimates. As a result, Double DQN offers a more consistent and dependable training process, especially in scenarios such as Atari games, where the regular DQN approach may face challenges with overestimation.
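
    The difference in how the target is formed can be sketched in a few lines of TypeScript. The two Q-value vectors below stand in for the online network and the target network evaluated at the next state; the function names, gamma, and numbers are illustrative assumptions.

    // Standard DQN: the target network both selects and evaluates the next action.
    function dqnTarget(reward: number, targetQNext: number[], gamma = 0.99): number {
      return reward + gamma * Math.max(...targetQNext);
    }

    // Double DQN: the online network selects the action, the target network evaluates it.
    function doubleDqnTarget(
      reward: number,
      onlineQNext: number[],
      targetQNext: number[],
      gamma = 0.99
    ): number {
      const bestAction = onlineQNext.indexOf(Math.max(...onlineQNext));
      return reward + gamma * targetQNext[bestAction];
    }

    // When the target network overestimates one action, Double DQN is less affected.
    console.log(dqnTarget(0, [1.0, 5.0]));                   // 4.95 (uses the inflated value)
    console.log(doubleDqnTarget(0, [2.0, 1.0], [1.0, 5.0])); // 0.99 (online net picks action 0)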

    Dueling Deep Q-Networks

    Dueling Deep Q-Networks (Dueling DQN) improves the learning process of the traditional Deep Q-Network (DQN) by separating the estimation of state values from action advantages. In the traditional DQN, an individual Q-value is calculated for every state-action combination, representing the expected cumulative reward. However, this can be inefficient, particularly when numerous actions result in similar consequences. Dueling DQN handles this issue by breaking down the Q-value into two primary parts: the state value V(s) and the advantage function A(s,a). The Q-value is then given by Q(s,a) = V(s) + A(s,a), where V(s) captures the value of being in a given state, and A(s,a) measures how much better an action is over others in the same state.
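
    The decomposition can be illustrated with a short TypeScript sketch that follows the formula above, adding the state value to each action's advantage; the function name and the numbers are illustrative, and the mean-subtraction variant used by many implementations is only noted in a comment.

    // Combine a scalar state value V(s) with per-action advantages A(s, a).
    function duelingQ(stateValue: number, advantages: number[]): number[] {
      // Common practical variant: Q = V + (A - mean(A)) for identifiability;
      // here we follow the simple Q = V + A form described in the text.
      return advantages.map((adv) => stateValue + adv);
    }

    console.log(duelingQ(3.0, [0.5, -0.2, 0.0])); // [3.5, 2.8, 3.0]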

    Dueling DQN helps the agent to enhance its understanding of the environment and prevent the learning of unnecessary action-value estimates by separately estimating state values and action advantages. This results in improved performance, particularly in situations with delayed rewards, allowing the agent to gain a better understanding of the importance of various states when choosing the optimal action.

  • Deep Reinforcement Learning Algorithms

    Deep reinforcement learning algorithms are a type of algorithms in machine learning that combines deep learning and reinforcement learning.

    Deep reinforcement learning addresses the challenge of enabling computational agents to learn decision-making by incorporating deep learning from unstructured input data without manual engineering of the state space.

    Deep reinforcement learning algorithms are capable of deciding what actions to perform for the optimization of an objective even with large inputs.

    Reinforcement Learning

    Reinforcement Learning consists of an agent that learns from the feedback given in response to its actions while exploring an environment. The main goal of the agent is to maximize cumulative rewards by developing a strategy that guides decision-making in all possible scenarios.

    Role of Deep Learning in Reinforcement Learning

    In traditional reinforcement learning algorithms, tables or basic function approximators are commonly used to represent value functions, policies, or models. However, these strategies are not efficient enough to be applied in challenging settings like video games, robotics or natural language processing. Neural networks allow for the approximation of complex, multi-dimensional functions through deep learning. This forms the basis of Deep Reinforcement Learning.

    Some of the benefits of the combination of deep learning networks and reinforcement learning are −

    • Dealing with inputs with high dimensions (such as raw images and continuous sensor data).
    • Understanding complex relationships between states and actions through learning.
    • Learning a common representation by generalizing among different states and actions.

    Deep Reinforcement Learning Algorithms

    The following are some of the common deep reinforcement learning algorithms −

    1. Deep Q-Networks

    A Deep Q-Network (DQN) is an extension of conventional Q-learning that employs deep neural networks to estimate the action-value function Q(s,a). Instead of storing Q-values in a table, DQN uses a neural network to deal with complicated input domains like game pixel data. This allows reinforcement learning to address complex tasks, like playing Atari, where the agent learns from visual inputs.

    DQN improves training stability through two primary methods: experience replay, which stores and selects past experiences, and target networks to maintain consistent Q-value targets by refreshing a different network periodically. These advancements assist DQN in effectively acquiring knowledge in large-scale settings.

    2. Double Deep Q-Networks

    Double Deep Q-Network (DDQN) enhances Deep Q-Network (DQN) by mitigating the problem of overestimation bias in Q-value updates. In typical DQN, a single Q-network is utilized for both action selection and value estimation, potentially resulting in overly optimistic value approximations.

    DDQN uses two distinct networks to manage action selection and evaluation − a current Q-network for choosing the action and a target Q-network for evaluating the action. This decrease in bias in the Q-value estimates leads to improved learning accuracy. DDQN incorporates the experience replay and target network methods used in DQN to improve the robustness and dependability.

    3. Dueling Deep Q-Networks

    Dueling Deep Q-Networks (Dueling DQN) is an extension to the standard Deep Q-Network (DQN) used in reinforcement learning. It separates the Q-value into two components − the state value function V(s) and the advantage function A(s,a), which estimates how much better each action is compared to the average action in that state.

    The final Q-value is estimated by combining these two components. This form of representation improves the robustness and effectiveness of Q-learning, because the model can estimate the state value more accurately and the need for precise action values in every situation is reduced.

    4. Policy Gradient Methods

    Policy Gradient Methods are algorithms based on a policy iteration approach where the policy is directly manipulated to reach the optimal policy that maximizes the expected reward. Rather than focusing on learning a value function, these methods maximize rewards by optimizing the policy along the gradient of the defined objective with respect to the policy parameters.

    The main objective is to compute the gradient of the expected reward and modify the policy accordingly. Common algorithms include REINFORCE, Actor-Critic, and Proximal Policy Optimization (PPO). These approaches can be applied effectively in high-dimensional or continuous action spaces.
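
    As a small illustration of the quantity these methods differentiate, the TypeScript sketch below computes the discounted return for each step of an episode; in REINFORCE-style methods this return scales the gradient of the log-probability of the action taken at that step. The function name and the gamma value are illustrative.

    // G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ... computed backwards in one pass.
    function discountedReturns(rewards: number[], gamma = 0.99): number[] {
      const returns = new Array<number>(rewards.length).fill(0);
      let running = 0;
      for (let t = rewards.length - 1; t >= 0; t--) {
        running = rewards[t] + gamma * running;
        returns[t] = running;
      }
      return returns;
    }

    // Each return G_t weights the policy-gradient term for the action taken at step t.
    console.log(discountedReturns([0, 0, 1], 0.9)); // [0.81, 0.9, 1]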

    5. Proximal Policy Optimization

    Proximal Policy Optimization (PPO) is a reinforcement learning algorithm designed to achieve more stable and efficient policy optimization. It updates policies by maximizing an objective function associated with the policy, but puts a cap on how much the policy can change in a single update in order to avoid drastic changes.

    Because a new policy should not move too far from the old policy, PPO adopts a clipped objective that prevents any update from changing the policy drastically from the previous one. This balance between exploration and exploitation avoids performance degradation and promotes smoother convergence. PPO is applied in deep reinforcement learning for both continuous and discrete action spaces due to its simplicity and effectiveness.

  • Deep Reinforcement Learning

    What is Deep Reinforcement Learning?

    Deep Reinforcement Learning (Deep RL) is a subset of Machine Learning that is a combination of reinforcement learning with deep learning. Deep RL addresses the challenge of enabling computational agents to learn decision-making by incorporating deep learning from unstructured input data without manual engineering of the state space. Deep RL algorithms are capable of deciding what actions to perform for the optimization of an objective even with large inputs.

    Key Concepts of Deep Reinforcement Learning

    The building blocks of Deep Reinforcement Learning include all the aspects that empower learning and agents for decision-making. Effective environments are produced by the collaboration of the following elements −

    • Agent − The learner and decision-maker who interacts with the environment. This agent acts according to the policies and gains experience.
    • Environment − The system outside agent that it communicates with. It gives the agent feedback in the form of incentives or punishments based on its actions.
    • State − Represents the current situation or condition of the environment at a specific moment, based on which the agent takes a decision.
    • Action − A choice the agent makes that changes the state of the system.
    • Policy − A plan that directs the agent’s decision-making by mapping states to actions.
    • Value Function − Estimates the expected cumulative reward an agent can achieve from a given state while following a specific policy.
    • Model − Represents the environment’s dynamics, allowing the agent to simulate potential outcomes of actions and states for planning purposes.
    • Exploration – Exploitation Strategy − A decision-making approach that balances exploring new actions for learning versus exploiting known actions for immediate rewards.
    • Learning Algorithm − The method by which the agent updates its value function or policy based on experiences gained from interacting with the environment.
    • Experience Replay − A technique that randomly samples from previously stored experiences during training to enhance learning stability and reduce correlations between consecutive events.

    How Deep Reinforcement Learning Works?

    Deep Reinforcement Learning uses artificial neural networks, which consist of layers of nodes that replicate the functioning of neurons in the human brain. These nodes process and relay information through the trial and error method to determine effective outcomes.

    In Deep RL, the term policy refers to the strategy the computer develops based on the feedback it receives from interacting with its environment. These policies help the computer make decisions by considering its current state and the available action set. Selecting among these options involves a process referred to as "search", through which the computer evaluates different actions and observes the outcomes. This ability to coordinate learning, decision-making, and representation could provide new insights into how the human brain operates.

    What sets deep reinforcement learning apart is its architecture, which allows it to learn in a way loosely similar to the human brain. It contains numerous layers of neural networks that can process unlabeled and unstructured data.

    List of Algorithms in Deep RL

    Following is the list of some important algorithms in deep reinforcement learning −

    • Deep Q-Network or Deep Q-Learning
    • Double Deep Q-Learning
    • Actor – Critic Method
    • Deep Deterministic Policy Gradient

    Applications of Deep Reinforcement Learning

    Some prominent fields that use deep Reinforcement Learning are −

    1. Gaming

    Deep RL is used in developing games that are far beyond what is humanly possible. The games designed using Deep RL include Atari 2600 games, Go, Poker, and many more.

    2. Robot Control

    Robot control often uses robust adversarial reinforcement learning, wherein an agent learns to operate in the presence of an adversary that applies disturbances to the system. The goal is to develop an optimal strategy to handle disruptions. AI-powered robots have a wide range of applications, including manufacturing, supply chain automation, healthcare, and many more.

    3. Self-driving Cars

    Deep reinforcement learning is one of the key concepts involved in autonomous driving. Autonomous driving scenarios involve understanding the environment, interacting agents, negotiation, and dynamic decision-making, which is made possible by reinforcement learning.

    4. Healthcare

    Deep reinforcement learning has enabled many advancements in healthcare, such as personalized medication to optimize patient care, especially for those suffering from chronic conditions.

    Difference Between RL and Deep RL

    The following table highlights the key differences between Reinforcement Learning(RL) and Deep Reinforcement Learning (Deep RL) −

    Feature | Reinforcement Learning | Deep Reinforcement Learning
    Definition | It is a subset of Machine Learning that uses a trial and error method for decision making. | It is a subset of RL that integrates deep learning for more complex decisions.
    Function Approximation | It uses simple methods like tabular methods for value estimation. | It uses neural networks for value estimation, allowing for more complex representations.
    State Representation | It relies on manually engineered features to represent the environment. | It automatically learns relevant features from raw input data.
    Complexity | It is effective for simple environments with smaller state/action spaces. | It is effective in high-dimensional, complex environments.
    Performance | It is effective in simpler environments but struggles in environments with large and continuous spaces. | It excels in complex tasks, such as playing video games or controlling robots.
    Applications | Can be used for basic tasks like simple games. | Can be used in advanced applications like autonomous driving, game playing, and robotic control.
  • Forms

    This chapter discusses Bootstrap forms. A form enables a user to enter data such as a name, email address, or password, which can then be sent to the server for processing. Bootstrap provides classes to create a variety of forms with varied styles, layouts and custom components.

    Basic form

    • Form controls in Bootstrap extend Rebooted form styles with classes. For consistent rendering across browsers and devices with customized display, use these classes.
    • To use more recent input controls, such as email verification, number selection, and other features, be sure to use an appropriate type attribute on all inputs (e.g., email for email addresses or number for numerical data).

    The following example demonstrates Bootstrap's basic form.
    https://www.tutorialspoint.com/bootstrap/examples/form_basic_form.php

    Example


    <!DOCTYPE html><html lang="en"><head><title>Bootstrap - Form</title><meta charset="UTF-8"><meta http-equiv="X-UA-Compatible" content="IE=edge"><meta name="viewport" content="width=device-width, initial-scale=1.0"><link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet"><script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js"></script></head><body><form><div class="mb-3"><label for="sampleInputEmail" class="form-label">Username</label><input type="email" class="form-control" id="sampleInputEmail" aria-describedby="emailHelp"></div><div class="mb-3"><label for="sampleInputPassword" class="form-label">Password</label><input type="password" class="form-control" id="sampleInputPassword"></div><div class="mb-3 form-check"><input type="checkbox" class="form-check-input" id="sampleCheck"><label class="form-check-label" for="sampleCheck">Remember me</label></div><button type="submit" class="btn btn-primary">Log in</button></form></body></html>

    Disabled forms

    • To prevent user interactions and make an input appear lighter, use the disabled boolean attribute.
    • To disable all the controls in a <fieldset>, add the disabled attribute to it. Browsers treat the native form controls (<input>, <select>, and <button> elements) inside a disabled <fieldset> as disabled, preventing keyboard and mouse interactions with them.
    • If the form has custom button-like elements, such as <a class="btn btn-*">…</a>, they only have pointer-events: none set, which means they are still focusable and keyboard-operable. To prevent them from receiving focus, use tabindex="-1", and use aria-disabled="true" to signal their state to assistive technologies.
    https://www.tutorialspoint.com/bootstrap/examples/form_disabled_form.php

    Example


    <!DOCTYPE html><html lang="en"><head><title>Bootstrap - Form</title><meta charset="UTF-8"><meta http-equiv="X-UA-Compatible" content="IE=edge"><meta name="viewport" content="width=device-width, initial-scale=1.0"><link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet"><script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js"></script></head><body><form><fieldset disabled><div class="mb-3"><label for="disabledEmailInput" class="form-label">Disabled Input</label><input type="text" id="disabledEmailInput" class="form-control" placeholder="Email Id"></div><div class="mb-3"><label for="disabledPasswordInput" class="form-label">Disabled Input</label><select id="disabledPasswordInput" class="form-select"><option>Password</option></select></div><div class="mb-3"><div class="form-check"><input class="form-check-input" type="checkbox" id="disabledcheckbox" disabled><label class="form-check-label" for="disabledcheckbox">Disabled Check Box</label></div></div><button type="submit" class="btn btn-primary">Disabled Button</button></fieldset></form></body></html>

    Accessibility

    • Every form control should have an appropriate accessible name for assistive technology users. Using label elements, or descriptive text within <button>...</button>, is the simplest way to achieve this.
    • When a visible label or appropriate text content is not provided, use other approaches for accessible names, for example:
      • Use the class .visually-hidden to hide the <label> element.
      • Use aria-labelledby to point to an existing element that behaves as a label.
      • Include a title attribute.
      • Use aria-label to set the element's accessible name.
    • When none of these are available, assistive technologies may fall back to the placeholder attribute on <input> and <textarea> elements for the accessible name.
    • Using visually hidden content will help assistive technology users, however certain users may still have issues with a lack of visible label text.
  • Tuples

    At times, there might be a need to store a collection of values of varied types. Arrays will not serve this purpose. TypeScript gives us a data type called tuple that helps to achieve such a purpose.

    It represents a heterogeneous collection of values. In other words, tuples enable storing multiple fields of different types. Tuples can also be passed as parameters to functions.

    Syntax

    We can create a tuple using JavaScript's array syntax:

    const tupleName =[value1, value2, value3,...valueN]

    But we need to declare its type as a tuple.

    const tupleName:[type1, type2, type3,...typeN]=[value1, value2, value3,...valueN]

    For Example

    const myTuple:[number,string]=[10,"Hello"];

    You can define a tuple first and then initialize,

    let myTuple:[number,string];// declaring the tuple
    myTuple =[10,"Hello"];// initializing the tuple

    Note that a tuple declared with const must be initialized at the time of declaration.

    You can also declare an empty tuple in TypeScript and choose to initialize it later.

    var myTuple =[]; 
    myTuple[0]=10;
    myTuple[1]="Hello";

    Accessing Values in Tuples

    Tuple values are individually called items. Tuples are index-based, which means that items in a tuple can be accessed using their corresponding numeric index. The tuple item index starts from zero and extends up to n-1 (where n is the tuple's size).

    Syntax

    Following is the syntax to access the values in a tuple using its index −

    tupleName[index]

    Example: Simple Tuple


    var myTuple: [number, string] = [10, "Hello"]; // create a tuple
    console.log(myTuple[0]);
    console.log(myTuple[1]);

    In the above example, a tuple, myTuple, is declared. The tuple contains values of numeric and string types respectively.

    On compiling, it will generate the following code in JavaScript.

    var myTuple =[10,"Hello"];//create a tuple 
    console.log(myTuple[0]);
    console.log(myTuple[1]);

    Its output is as follows −

    10 
    Hello
    

    Example: Empty Tuple

    We can declare an empty tuple as follows and then initialize it.


    var tup = [];
    tup[0] = 12;
    tup[1] = 23;
    console.log(tup[0]);
    console.log(tup[1]);

    On compiling, it will generate the same code in JavaScript.

    Its output is as follows −

    12 
    23 
    

    Tuple Operations

    Tuples in TypeScript support various operations like pushing a new item, removing an item from the tuple, etc.

    Example


    var myTuple: [number, string, string, string];
    myTuple = [10, "Hello", "World", "typeScript"];
    console.log("Items before push " + myTuple.length);

    myTuple.push(12); // append value to the tuple
    console.log("Items after push " + myTuple.length);
    console.log("Items before pop " + myTuple.length);
    // removes and returns the last item
    console.log(myTuple.pop() + " popped from the tuple");
    console.log("Items after pop " + myTuple.length);
    • The push() appends an item to the tuple
    • The pop() removes and returns the last value in the tuple

    On compiling, it will generate the following code in JavaScript.

    var myTuple;
    myTuple =[10,"Hello","World","typeScript"];
    console.log("Items before push "+ myTuple.length);
    myTuple.push(12);// append value to the tuple 
    console.log("Items after push "+ myTuple.length);
    console.log("Items before pop "+ myTuple.length);// removes and returns the last item
    console.log(myTuple.pop()+" popped from the tuple"); 
    console.log("Items after pop "+ myTuple.length);

    The output of the above code is as follows −

    Items before push 4 
    Items after push 5 
    Items before pop 5 
    12 popped from the tuple 
    Items after pop 4
    

    Updating Tuples

    Tuples are mutable which means you can update or change the values of tuple elements.

    Example


    var myTuple: [number, string, string, string]; // define tuple
    myTuple = [10, "Hello", "World", "typeScript"]; // initialize tuple
    console.log("Tuple value at index 0 " + myTuple[0]);
    // update a tuple element
    myTuple[0] = 121;
    console.log("Tuple value at index 0 changed to " + myTuple[0]);

    On compiling, it will generate the following code in JavaScript.

    var myTuple;// define tuple
    myTuple =[10,"Hello","World","typeScript"];// initialize tuple
    console.log("Tuple value at index 0 "+ myTuple[0]);//update a tuple element 
    myTuple[0]=121;
    console.log("Tuple value at index 0 changed to   "+ myTuple[0]);

    The output of the above code is as follows −

    Tuple value at index 0 10 
    Tuple value at index 0 changed to 121
    

    Destructuring a Tuple

    Destructuring refers to breaking up the structure of an entity. TypeScript supports destructuring when used in the context of a tuple.

    Example


    var a: [number, string] = [10, "hello"];
    var [b, c] = a;
    console.log(b);
    console.log(c);

    On compiling, it will generate following JavaScript code.

    var a = [10, "hello"];
    var b = a[0], c = a[1];
    console.log(b);
    console.log(c);

    Its output is as follows −

    10
    hello 
    

    Function Parameters and Tuple Types

    We can define a function that explicitly accepts a tuple type. Then, while calling the function, we pass a tuple as the argument.

    Example


    function processData(data: [string, number]): void {
      const [name, age] = data;
      console.log(`Name: ${name}, Age: ${age}`);
    }
    let data: [string, number] = ["John", 32];
    processData(data);

    We defined here a function processData() that accepts a parameter of tuple type. Inside the function we use tuple destructuring to get the constituent elements. We call the function passing a tuple as argument.

    On compiling, it will generate the following JavaScript code.

    function processData(data) {
      const [name, age] = data;
      console.log(`Name: ${name}, Age: ${age}`);
    }
    let data = ["John", 32];
    processData(data);

    The output of the above code is as follows −

    Name: John, Age: 32
    
  • Arrays

    The use of variables to store values poses the following limitations −

    • Variables are scalar in nature. In other words, a variable declaration can only contain a single value at a time. This means that to store n values in a program, n variable declarations will be needed. Hence, the use of variables is not feasible when one needs to store a larger collection of values.
    • Variables in a program are allocated memory in random order, thereby making it difficult to retrieve/read the values in the order of their declaration.

    TypeScript introduces the concept of arrays to tackle the same. An array is a homogeneous collection of values. To simplify, an array is a collection of values of the same data type. It is a user-defined type.

    Features of an Array

    Here is a list of the features of an array −

    • An array declaration allocates sequential memory blocks.
    • Arrays are static. This means that an array once initialized cannot be resized.
    • Each memory block represents an array element.
    • Array elements are identified by a unique integer called as the subscript / index of the element.
    • Like variables, arrays too, should be declared before they are used. Use the var keyword to declare an array.
    • Array initialization refers to populating the array elements.
    • Array element values can be updated or modified but cannot be deleted.

    Declaring and Initializing Arrays

    To declare and initialize an array in TypeScript, use the following syntax −

    Syntax

    var array_name[:datatype];        //declaration 
    array_name = [val1,val2,valn..]   //initialization
    

    An array declaration without the data type is deemed to be of the type any. The type of such an array is inferred from the data type of the array's first element during initialization.

    For example, a declaration like − var numlist:number[] = [2,4,6,8] will create an array as given below −

    Declaring and Initializing Arrays

    The array pointer refers to the first element by default.

    Arrays may be declared and initialized in a single statement. The syntax for the same is −

    var array_name[:datatype] = [val1, val2, ..., valn]
    

    Note − The pair of [] is called the dimension of the array.

    Accessing Array Elements

    The array name followed by the subscript is used to refer to an array element. Its syntax is as follows −

    array_name[subscript] = value
    

    Example: Simple Array


    var alphas: string[];
    alphas = ["1", "2", "3", "4"];
    console.log(alphas[0]);
    console.log(alphas[1]);

    On compiling, it will generate following JavaScript code −

    var alphas;
    alphas = ["1", "2", "3", "4"];
    console.log(alphas[0]);
    console.log(alphas[1]);

    The output of the above code is as follows −

    1 
    2 
    

    Example: Single statement declaration and initialization


    var nums: number[] = [1, 2, 3, 3];
    console.log(nums[0]);
    console.log(nums[1]);
    console.log(nums[2]);
    console.log(nums[3]);

    On compiling, it will generate following JavaScript code −

    var nums = [1, 2, 3, 3];
    console.log(nums[0]);
    console.log(nums[1]);
    console.log(nums[2]);
    console.log(nums[3]);

    Its output is as follows −

    1 
    2 
    3 
    3 
    

    Array Object

    An array can also be created using the Array object. The Array constructor can be passed.

    • A numeric value that represents the size of the array or
    • A list of comma separated values.

    The following example shows how to create an array using this method.

    Example


    var arr_names: number[] = new Array(4);
    for (var i = 0; i < arr_names.length; i++) {
      arr_names[i] = i * 2;
      console.log(arr_names[i]);
    }

    On compiling, it will generate following JavaScript code.

    var arr_names = new Array(4);
    for (var i = 0; i < arr_names.length; i++) {
      arr_names[i] = i * 2;
      console.log(arr_names[i]);
    }

    Its output is as follows −

    0 
    2 
    4 
    6 
    

    Example: Array Constructor accepts comma separated values


    var names: string[] = new Array("Mary", "Tom", "Jack", "Jill");
    for (var i = 0; i < names.length; i++) {
      console.log(names[i]);
    }

    On compiling, it will generate following JavaScript code −

    var names = new Array("Mary", "Tom", "Jack", "Jill");
    for (var i = 0; i < names.length; i++) {
      console.log(names[i]);
    }

    Its output is as follows −

    Mary 
    Tom 
    Jack 
    Jill
    

    Array Methods

    A list of the methods of the Array object along with their description is given below.

    1. concat() − Returns a new array comprised of this array joined with other array(s) and/or value(s).
    2. every() − Returns true if every element in this array satisfies the provided testing function.
    3. filter() − Creates a new array with all of the elements of this array for which the provided filtering function returns true.
    4. forEach() − Calls a function for each element in the array.
    5. indexOf() − Returns the first (least) index of an element within the array equal to the specified value, or -1 if none is found.
    6. join() − Joins all elements of an array into a string.
    7. lastIndexOf() − Returns the last (greatest) index of an element within the array equal to the specified value, or -1 if none is found.
    8. map() − Creates a new array with the results of calling a provided function on every element in this array.
    9. pop() − Removes the last element from an array and returns that element.
    10. push() − Adds one or more elements to the end of an array and returns the new length of the array.
    11. reduce() − Applies a function against an accumulator and each value of the array (from left to right) so as to reduce it to a single value.
    12. reduceRight() − Applies a function against an accumulator and each value of the array (from right to left) so as to reduce it to a single value.
    13. reverse() − Reverses the order of the elements of an array; the first becomes the last, and the last becomes the first.
    14. shift() − Removes the first element from an array and returns that element.
    15. slice() − Extracts a section of an array and returns a new array.
    16. some() − Returns true if at least one element in this array satisfies the provided testing function.
    17. sort() − Sorts the elements of an array.
    18. splice() − Adds and/or removes elements from an array.
    19. toString() − Returns a string representing the array and its elements.
    20. unshift() − Adds one or more elements to the front of an array and returns the new length of the array.

    Array Destructuring

    Destructuring refers to breaking up the structure of an entity. TypeScript supports destructuring when used in the context of an array.

    Example


    var arr: number[] = [12, 13];
    var [x, y] = arr;
    console.log(x);
    console.log(y);

    On compiling, it will generate following JavaScript code.

    var arr = [12, 13];
    var x = arr[0], y = arr[1];
    console.log(x);
    console.log(y);

    Its output is as follows −

    12 
    13
    

    Array Traversal using the for...in Loop

    One can use the for...in loop to traverse through an array.


    var j: any;
    var nums: number[] = [1001, 1002, 1003, 1004];
    for (j in nums) {
      console.log(nums[j]);
    }

    The loop performs an index based array traversal.

    On compiling, it will generate following JavaScript code.

    var j;
    var nums = [1001, 1002, 1003, 1004];
    for (j in nums) {
      console.log(nums[j]);
    }

    The output of the above code is given below −

    1001 
    1002 
    1003 
    1004
    

    Arrays in TypeScript

    TypeScript supports the following concepts in arrays −

    1. Multi-dimensional arrays − TypeScript supports multi-dimensional arrays. The simplest form of the multi-dimensional array is the two-dimensional array.
    2. Passing arrays to functions − You can pass an array to a function by specifying the array's name without an index.
    3. Return array from functions − Allows a function to return an array.
  • Boolean

    The TypeScript boolean type represents logical values like true and false. Logical values are used to control the flow of execution within a program. As JavaScript offers both a boolean primitive and a Boolean object type, TypeScript provides corresponding type annotations. We can create a boolean primitive as well as a Boolean object.

    We can create a Boolean object by using the Boolean() constructor with the new keyword.

    To convert a non-boolean value to boolean, we should use the Boolean() function or the double NOT (!!) operator, not the Boolean constructor with the new keyword.

    Syntax

    To declare a boolean value (primitive) in TypeScript we can use keyword “boolean”.

    let varName:boolean=true;

    In the above syntax, we declare a boolean variable varName and assign the value true.

    To create a Boolean object we use the Boolean() constructor with new Keyword.

    const varName = new Boolean(value);

    In the above syntax, the value is an expression to be converted to Boolean object. The Boolean() constructor with new returns an object containing the boolean value.

    For example,

    const isTrue = new Boolean(true);

    In the above example, isTrue holds a Boolean object wrapping the value true.

    Type Annotations

    Type annotations in TypeScript are optional, as TypeScript can infer the types of variables automatically. We can use the boolean keyword to annotate the types of boolean variables.

    let isPresent: boolean = true; // with type annotation
    let isPresent = true;          // without type annotation

    Like variables, the function parameters and return type can also be annotated.

    Truthy and Falsy Values

    In TypeScript, falsy values are the values that are evaluated to false. There are six falsy values

    • false
    • 0 (zero)
    • Empty string (“”)
    • null
    • undefined
    • NaN

    All other values are truthy.

    Converting a non-boolean value to boolean

    All the above discussed falsy values are converted to false and truthy values are converted to true. To convert a non-boolean value to boolean, we can use the Boolean() function and double NOT (!!) operator.

    Using the Boolean() Function

    The Boolean() function in TypeScript converts a non-boolean value to boolean. It returns a primitive boolean value.

    let varName =Boolean(value);

    The value is an expression to be converted to boolean.

    Example

    In the below example, we use the Boolean() function to convert a non-boolean value to boolean.


    const myBoolean1 = Boolean(10);
    console.log(myBoolean1); // true
    const myBoolean2 = Boolean("");
    console.log(myBoolean2); // false

    On compiling, the above code will produce the same code in JavaScript. On executing, the JavaScript code will produce the following output −

    true
    false
    

    Using Double NOT (!!) Operator

    The double NOT (!!) operator in TypeScript converts a non-boolean value to boolean. It returns a primitive boolean value.

    let varName =!!(value);

    The value is an expression to be converted to boolean.

    Example

    In the below example, we use the double NOT (!!) operator to convert a non-boolean value to boolean.


    const myBoolean1 = !!(10);
    console.log(myBoolean1); // true
    const myBoolean2 = !!("");
    console.log(myBoolean2); // false

    On compiling, the above code will produce the same code in JavaScript. On executing, the JavaScript code will produce the following output −

    true
    false
    

    Boolean Operations

    Boolean operations, or logical operations, in TypeScript can be performed using three logical operators: AND (&&), OR (||) and NOT (!). These operations return a boolean value (true or false).

    Example: Logical AND (&&)

    In the example below, we defined two boolean variables x and y. Then perform the logical AND (&&) of these variables.


    let x: boolean = true;
    let y: boolean = false;
    let result: boolean = x && y;
    console.log(result); // Output: false

    On compiling, it will generate the following JavaScript code.

    var x = true;
    var y = false;
    var result = x && y;
    console.log(result); // Output: false

    The output of the above example code is as follows

    false
    

    Example: Logical OR (||)

    In the example below, we perform logical OR (||) operation of the two boolean variables x and y.


    let x: boolean = true;
    let y: boolean = false;
    let result: boolean = x || y;
    console.log(result); // Output: true

    On compiling, it will generate the following JavaScript code.

    var x = true;
    var y = false;
    var result = x || y;
    console.log(result); // Output: true

    The output of the above example code is as follows

    true
    

    Example: Logical NOT (!)

    The logical NOT (!) operation of the boolean variable isPresent is performed in the below example.


    let isPresent: boolean = false;
    let isAbsent: boolean = !isPresent;
    console.log(isAbsent); // Output: true

    On compiling, it will generate the same code in JavaScript.

    The output of the above example code is as follows

    true
    

    Conditional Expression with Booleans

    Example: If statement

    The below example shows how to use the conditional expression with boolean in if else statement.


    let age: number = 25;
    let isAdult: boolean = age >= 18;
    if (isAdult) {
      console.log('You are an adult.');
    } else {
      console.log('You are not an adult.');
    }

    On compiling it will generate the following JavaScript code

    let age = 25;
    let isAdult = age >= 18;
    if (isAdult) {
      console.log('You are an adult.');
    } else {
      console.log('You are not an adult.');
    }

    The output of the above example code is as follows

    You are an adult.
    

    Example: Conditional Statement (Ternary Operator)

    Try the following example


    let score: number = 80;
    let isPassing: boolean = score >= 70;
    let result: string = isPassing ? 'Pass' : 'Fail';
    console.log(result); // Output: Pass

    On compiling, it will generate the following JavaScript code

    let score = 80;
    let isPassing = score >= 70;
    let result = isPassing ? 'Pass' : 'Fail';
    console.log(result); // Output: Pass

    The output of the above example code is as follows

    Pass
    

    TypeScript Boolean vs boolean

    Boolean is not the same as the boolean type. Boolean is an object wrapper and does not refer to the primitive value, whereas boolean is the primitive data type in TypeScript. You should always use boolean, with a lowercase b, for type annotations.

    Boolean Objects

    As we have seen above, we can create a Boolean object containing a boolean value using the Boolean constructor with the new keyword. The Boolean wrapper class provides different properties and methods to work with boolean values.

    Boolean Properties

    Here is a list of the properties of Boolean object

    1. constructor − Returns a reference to the Boolean function that created the object.
    2. prototype − The prototype property allows you to add properties and methods to an object.

    In the following sections, we will have a few examples to illustrate the properties of Boolean object.

    Boolean Methods

    Here is a list of the methods of Boolean object and their description.

    1. toSource() − Returns a string containing the source of the Boolean object; you can use this string to create an equivalent object.
    2. toString() − Returns a string of either "true" or "false" depending upon the value of the object.
    3. valueOf() − Returns the primitive value of the Boolean object.