
  • Trust Region Methods

    In reinforcement learning, especially in policy optimization techniques, the main goal is to modify the agent's policy to improve performance without destabilizing its behavior. This is important when working with deep neural networks: if updates are large or not properly limited, training can become unstable. Trust regions help maintain stability by keeping parameter updates small and controlled during training.

    What is a Trust Region?

    A trust region is a concept used in optimization that restricts updates to the policy or value function during training, maintaining stability and reliability in the learning process. Trust regions limit the extent to which the model's parameters, such as the weights of a policy network, are allowed to vary during updates. This helps avoid large or unpredictable changes that may disrupt the learning process.

    Role of Trust Regions in Policy Optimization

    The idea of trust regions is used to regulate the extent to which the policy can be altered during updates. This guarantees that every update improves the policy without implementing drastic changes that could cause instability or affect performance. Some of the aspects where trust regions play an important role are −

    • Policy Gradient − Policy gradient methods modify the policy to maximize expected rewards. In the absence of a trust region, large updates can result in unpredictable behavior, particularly when employing function approximators such as deep neural networks.
    • KL Divergence − Trust Region Policy Optimization (TRPO) uses the Kullback-Leibler (KL) divergence between the old and new policies as the measure of how much the policy has changed. The main idea is that small policy changes tend to improve the agent's performance consistently, whereas large changes may lead to instability.
    • Surrogate Objective in PPO − Proximal Policy Optimization (PPO) approximates a trust region through a surrogate objective function with a clipping mechanism. The clipping penalizes large deviations from the previous policy, preventing drastic changes while still improving the policy.

    Trust Region Methods for Deep Reinforcement Learning

    Following is a list of algorithms that use trust regions in deep reinforcement learning to ensure that updates are effective and reliable, improving the overall performance −

    1. Trust Region Policy Optimization

    Trust Region Policy Optimization (TRPO) is a reinforcement learning algorithm that aims to improve policies in an efficient and stable way. It deals with the large, unstable updates that often occur in policy gradient methods by introducing a trust region constraint.

    The constraint used in TRPO is the Kullback-Leibler (KL) divergence, which restricts how far the new policy can move from the old one by measuring their disparity. This helps TRPO maintain a stable learning process and improves the efficiency of the policy.

    The TRPO algorithm works by iteratively updating the policy parameters to improve a surrogate objective function within the boundaries of the trust region constraint, balancing policy improvement against stability, as sketched below.
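
    The sketch below illustrates, in minimal NumPy form, the acceptance test at the heart of TRPO: a candidate update is kept only if it improves the surrogate objective and the mean KL divergence between the old and new policies stays within a trust region radius δ. This is a simplified illustration assuming discrete action probabilities; the full algorithm computes the step direction with conjugate gradients, which is omitted here.

    ```python
    import numpy as np

    def mean_kl(old_probs, new_probs, eps=1e-8):
        """Mean KL(old || new) over a batch of discrete action distributions."""
        log_ratio = np.log(old_probs + eps) - np.log(new_probs + eps)
        return np.mean(np.sum(old_probs * log_ratio, axis=-1))

    def accept_update(old_probs, new_probs, surrogate_gain, delta=0.01):
        """TRPO-style line-search test: keep the step only if it improves the
        surrogate objective AND stays inside the KL trust region."""
        return surrogate_gain > 0 and mean_kl(old_probs, new_probs) <= delta

    # Hypothetical batch: 3 states, 2 actions each.
    old = np.array([[0.5, 0.5], [0.6, 0.4], [0.7, 0.3]])
    new = np.array([[0.55, 0.45], [0.62, 0.38], [0.68, 0.32]])
    print(accept_update(old, new, surrogate_gain=0.02))  # True: small, safe step
    ```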

    2. Proximal Policy Optimization

    Proximal Policy Optimization (PPO) is a reinforcement learning algorithm whose aim is to enhance the consistency and reliability of policy updates. It uses a surrogate objective function with a clipping mechanism to avoid extreme policy adjustments. This ensures that the new policy does not differ too much from the old one, while maintaining a balance between exploration and exploitation.

    PPO is among the simplest and most effective of the trust region techniques. It is widely used in applications such as robotics and autonomous cars because of its reliability and simplicity. The algorithm collects a batch of experiences, computes advantage estimates, and performs several epochs of stochastic gradient descent to update the policy, using the clipped objective sketched below.
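
    The following is a minimal NumPy sketch of PPO's clipped surrogate objective. The probability ratios and advantage estimates are placeholder values; in a real implementation they come from the current and old policy networks and an advantage estimator.

    ```python
    import numpy as np

    def ppo_clip_objective(ratio, advantage, epsilon=0.2):
        """Clipped surrogate: L = E[min(r*A, clip(r, 1-eps, 1+eps)*A)].
        Moving the ratio far from 1 earns no extra credit, so large
        policy changes are discouraged."""
        unclipped = ratio * advantage
        clipped = np.clip(ratio, 1 - epsilon, 1 + epsilon) * advantage
        return np.mean(np.minimum(unclipped, clipped))

    # ratio = pi_new(a|s) / pi_old(a|s) for a sampled batch (placeholder values)
    ratio = np.array([1.05, 0.7, 1.4])
    advantage = np.array([2.0, -1.0, 0.5])
    print(ppo_clip_objective(ratio, advantage))
    ```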

    3. Natural Gradient Descent

    This technique modifies the step size according to the curvature of the objective function to form a trust region surrounding the current policy. It is particularly effective in high-dimensional environments.

    Challenges in Trust Regions

    There are certain challenges while implementing trust region techniques in deep reinforcement learning −

    • Most trust region techniques like TRPO and PPO require approximations, which can violate constraints or fail to find the optimal solution within the trust region.
    • The techniques can be computationally intensive, especially with high-dimensional spaces.
    • These techniques often require a wide range of samples for effective learning.
    • The efficiency of trust region techniques highly depends on the choice of hyperparameters. Tuning these parameters is quite challenging and often requires expertise.
  • Deep Deterministic Policy Gradient (DDPG)

    Deep Deterministic Policy Gradient (DDPG) is an algorithm that simultaneously learns both a Q-function and a policy. It learns the Q-function using off-policy data and the Bellman equation, and then uses the Q-function to learn the policy.

    What is Deep Deterministic Policy Gradient?

    Deep Deterministic Policy Gradient (DDPG) is a reinforcement learning algorithm created to address problems with continuous action spaces. It is based on the actor-critic architecture and combines Q-learning with policy gradient methods. DDPG is an off-policy, model-free algorithm that uses deep learning to approximate value functions and policies, making it suitable for tasks involving continuous actions such as robotic control and autonomous driving.

    In simple terms, it extends Deep Q-Networks (DQN) to continuous action spaces by using a deterministic policy instead of the stochastic policies usual in DQN or REINFORCE.

    Key Concepts in DDPG

    The key concepts involved in Deep Deterministic Policy Gradient (DDPG) are −

    • Policy Gradient Theorem − The deterministic policy gradient theorem is employed by DDPG, which allows the calculation of the gradient of the expected return in relation to the policy parameters. Additionally, this gradient is used for updating the actor network.
    • Off-Policy − DDPG is an off-policy algorithm, indicating it learns from experiences created by a policy that is not the one being optimized. This is done by storing previous experiences in the replay buffer and using them for learning.

    What is Deterministic in DDPG?

    A deterministic policy maps each state directly to an action: given a state, it returns a single action to perform. This contrasts with a stochastic policy, which returns a probability distribution over actions for every state. Deterministic policies are suited to deterministic environments, where the actions taken fully determine the outcome.

    Core Components in DDPG

    Following are the core components used in Deep Deterministic Policy Gradient (DDPG) −

    • Actor-Critic Architecture − The actor is the policy network: it takes the state as input and outputs a deterministic action. The critic is the Q-function approximator that estimates the action-value function Q(s,a): it takes both the state and the action as input and predicts the expected return.
    • Deterministic Policy − DDPG uses a deterministic policy instead of the stochastic policies used by algorithms like REINFORCE and other policy gradient methods. The actor produces a single action for a given state rather than a distribution over actions.
    • Experience Replay − DDPG uses an experience replay buffer to store previous experiences as tuples of state, action, reward, and next state. Mini-batches sampled from the buffer break the temporal dependencies among successive experiences, which improves training stability.
    • Target Networks − To ensure stability in learning, DDPG employs target networks for both the actor and the critic. These are slowly updated copies of the original networks that reduce the variance of the targets during training.
    • Exploration Noise − Since DDPG follows a deterministic policy, the policy is inherently greedy and would not explore the environment sufficiently, so noise is added to the actions during training.

    How does DDPG Work?

    Deep Deterministic Policy Gradient (DDPG) is a reinforcement learning algorithm used particularly for continuous action spaces. It is an actor-critic method, i.e., it uses two models: an actor, which decides the action to take in the current state, and a critic, which assesses the effectiveness of the action taken. The working of DDPG is described below −

    Continuous Action Spaces

    DDPG is effective in environments with continuous action spaces, such as controlling a car's speed and steering, in contrast to the discrete action spaces found in many games.

    Experience Replay

    DDPG uses experience replay by storing the agent's experiences in a buffer and sampling random batches of experiences for updating the networks. Each experience is stored as the tuple (s_t, a_t, r_t, s_t+1), where −

    • s_t represents the state at time t.
    • a_t represents the action taken.
    • r_t represents the reward received.
    • s_t+1 represents the new state after the action.

    Randomly selecting experiences from the replay buffer reduces the correlation between consecutive events, leading to more stable training.
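
    A minimal sketch of such a replay buffer in Python, assuming experiences are stored as plain (state, action, reward, next_state) tuples:

    ```python
    import random
    from collections import deque

    class ReplayBuffer:
        """Fixed-size buffer of (state, action, reward, next_state) tuples."""

        def __init__(self, capacity=100_000):
            self.buffer = deque(maxlen=capacity)  # oldest experiences drop off

        def add(self, state, action, reward, next_state):
            self.buffer.append((state, action, reward, next_state))

        def sample(self, batch_size):
            # Uniform random sampling breaks the correlation between
            # consecutive transitions, which stabilizes training.
            return random.sample(self.buffer, batch_size)
    ```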

    Actor-Critic Training

    • Critic Update − The critic update is based on Temporal Difference (TD) learning, specifically the TD(0) variant. The critic's task is to assess the actor's decisions by estimating the Q-value, which predicts future rewards for specific state-action combinations. The critic update in DDPG minimizes the TD error, the difference between the predicted Q-value and the target Q-value (see the sketch after this list).
    • Actor Update − The actor update adjusts the actor's neural network to improve the policy, i.e., the decision-making process. The gradient of the Q-value with respect to the action is computed, and the actor's network is updated using gradient ascent to boost the likelihood of choosing actions that result in higher Q-values, improving the policy overall.
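
    A minimal sketch of the critic's TD(0) target, where target_actor and target_critic are hypothetical stand-ins for the target networks:

    ```python
    def ddpg_critic_target(reward, next_state, target_actor, target_critic, gamma=0.99):
        """TD(0) target for the critic: y = r + gamma * Q'(s', mu'(s')).
        The critic is then trained to minimize (Q(s, a) - y)^2."""
        next_action = target_actor(next_state)          # mu'(s')
        return reward + gamma * target_critic(next_state, next_action)
    ```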

    Target Networks and Soft Updates

    Instead of directly copying learned networks to target networks, DDPG employs a soft update approach, which updates target networks with a portion of the learned networks.

    θ′ ← τθ + (1−τ)θ′, where τ is a small value that ensures slow updates and improves stability.
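
    A minimal sketch of this soft update, treating the parameters as plain lists of numbers (deep learning frameworks apply the same rule tensor by tensor):

    ```python
    def soft_update(target_params, online_params, tau=0.005):
        """Polyak averaging: theta' <- tau * theta + (1 - tau) * theta'."""
        return [tau * theta + (1 - tau) * theta_target
                for theta, theta_target in zip(online_params, target_params)]

    # Example: the target parameters move only slightly toward the online ones.
    print(soft_update(target_params=[0.0, 1.0], online_params=[1.0, 0.0]))
    ```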

    Exploration-exploitation

    DDPG adds Ornstein-Uhlenbeck noise to the actions to promote exploration, since deterministic policies could become trapped in suboptimal solutions in continuous action spaces. The noise motivates the agent to explore the environment, as sketched below.
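
    A minimal sketch of an Ornstein-Uhlenbeck noise process added to a deterministic action; the mu, theta, and sigma values are common defaults, not values prescribed by the text:

    ```python
    import numpy as np

    class OUNoise:
        """Ornstein-Uhlenbeck process: temporally correlated noise that is
        added to the deterministic action during training."""

        def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2):
            self.mu, self.theta, self.sigma = mu, theta, sigma
            self.state = np.full(size, mu)

        def sample(self):
            # Mean-reverting drift plus Gaussian diffusion.
            dx = self.theta * (self.mu - self.state) \
                 + self.sigma * np.random.randn(len(self.state))
            self.state = self.state + dx
            return self.state

    noise = OUNoise(size=2)
    noisy_action = np.array([0.3, -0.1]) + noise.sample()  # exploratory action
    ```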

    Challenges in DDPG

    The two main challenges in DDPG that have to be addressed are −

    • Instability − DDPG may experience stability issues in training, especially when employed with function approximators such as neural networks. This is mitigated using target networks and experience replay; however, it still needs precise tuning of hyperparameters.
    • Exploration − Even with the use of Ornstein-Uhlenbeck noise for exploration, DDPG can face difficulties in extremely complicated environments if the exploration strategy is not effective.
  • Deep Q-Networks (DQN)

    What are Deep Q-Networks?

    Deep Q-Network (DQN) is an algorithm in the field of reinforcement learning. It combines deep neural networks with Q-learning, enabling agents to learn optimal policies in complex environments. Traditional Q-learning works effectively for environments with a small, finite number of states, but it struggles with large or continuous state spaces because the Q-table becomes too big. Deep Q-Networks overcome this limitation by replacing the Q-table with a neural network that approximates the Q-values for every state-action pair.

    Key Components of Deep Q-Networks

    Following is a list of components that are a part of the architecture of Deep Q-Networks −

    • Input Layer − This layer receives state information from the environment in the form of a vector of numerical values.
    • Hidden Layers − The DQN's hidden layers consist of multiple fully connected neurons that transform the input data into more complex features that are more suitable for predictions.
    • Output Layer − Each possible action in the current state is represented by a single neuron in the DQN’s output layer. The output values of these neurons represent the estimated value of each action within that state.
    • Memory − DQN utilizes a memory replay to store the training events of the agent. All the information including the current state, action taken, the reward received, and the next state are stored as tuples in the memory.
    • Loss Function − The DQN computes the difference between the actual Q-values from replay memory and the predicted Q-values to determine the loss.
    • Optimization − It involves adjusting the network’s weights in order to minimize the loss function. Usually, stochastic gradient descent (SGD) is employed for this purpose.

    The following image depicts the components in the Deep Q-Network architecture −

    Deep Q-Network Architecture

    How Deep Q-Networks Work?

    The working of DQN involves the following steps −

    Neural Network Architecture

    The DQN takes a sequence of frames (such as images from a game) as input and generates a set of Q-values for every potential action in that state. The typical configuration includes convolutional layers to capture spatial relationships and fully connected layers to output the Q-values.

    Experience Replay

    While training, the agent stores its interactions (state, action, reward, next state) in a replay buffer. Sampling random batches from this buffer to train the network reduces the correlation between consecutive experiences and improves training stability.

    Target Network

    To stabilize the training process, Deep Q-Networks employ a distinct target network for producing Q-value targets. The target network receives periodic copies of the main network's weights, which reduces the risk of divergence during training.

    Epsilon-Greedy Policy

    The agent uses an epsilon-greedy strategy: it selects a random action with probability ε and the action with the highest Q-value with probability 1−ε. This balance between exploration and exploitation helps the agent learn effectively; a minimal sketch follows.
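
    A minimal sketch of epsilon-greedy action selection over a list of Q-values:

    ```python
    import random

    def epsilon_greedy(q_values, epsilon=0.1):
        """With probability epsilon explore (random action); otherwise
        exploit (action with the highest Q-value)."""
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        return max(range(len(q_values)), key=lambda a: q_values[a])

    print(epsilon_greedy([0.2, 1.5, -0.3]))  # usually returns action 1
    ```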

    Training Process

    The neural network is trained using gradient descent to minimize the loss between the predicted Q-values and the target Q-values. The target Q-values are calculated using the Bellman equation, which combines the reward received with the discounted maximum Q-value of the next state, as sketched below.
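
    A minimal sketch of the Bellman target for a single transition, assuming next_q_values are the target network's Q-values for the next state:

    ```python
    def dqn_target(reward, next_q_values, done, gamma=0.99):
        """Bellman target: r + gamma * max_a' Q_target(s', a'),
        or just r when the episode has terminated."""
        if done:
            return reward
        return reward + gamma * max(next_q_values)

    print(dqn_target(reward=1.0, next_q_values=[0.5, 2.0, 1.2], done=False))  # 2.98
    ```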

    Limitations of Deep Q-Networks

    Deep Q-Networks (DQNs) have several limitations that impact their efficiency and performance −

    • DQNs suffer from instability due to the non-stationarity caused by frequent neural network updates.
    • DQNs at times overestimate Q-values, which can have a negative impact on the learning process.
    • DQNs require many samples to learn well, which can be computationally expensive and time-consuming.
    • DQN performance is greatly influenced by the choice of hyperparameters, such as the learning rate, discount factor, and exploration rate.
    • DQNs are mainly intended for discrete action spaces and may face difficulties in environments with continuous action spaces.

    Double Deep Q-Networks

    Double DQN is an extended version of the Deep Q-Network created to address an issue in the basic DQN method: overestimation bias in Q-value updates. The bias arises because the Q-learning update rule uses the same Q-network both to choose and to evaluate actions, which inflates the Q-value estimates. This can cause instability in training and hinder the learning process. Double DQN solves this issue with two different networks −

    • the Q-network, responsible for choosing the action;
    • the target network, which evaluates the value of the chosen action.

    The major modification in Double DQN lies in how the target is calculated. Rather than using the Q-network alone for both choosing and evaluating the next action, Double DQN uses the Q-network to select the action in the subsequent state and the target network to evaluate the Q-value of the chosen action. This separation reduces the tendency to overestimate and yields more precise value estimates. As a result, Double DQN offers a more consistent and dependable training process, especially in scenarios such as Atari games, where the regular DQN approach may struggle with overestimation. The sketch below contrasts the two targets.
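
    A minimal sketch of the Double DQN target for one transition, with next_q_online and next_q_target as the two networks' Q-values for the next state:

    ```python
    def double_dqn_target(reward, next_q_online, next_q_target, done, gamma=0.99):
        """Double DQN: the online network CHOOSES the next action, the
        target network EVALUATES it, reducing overestimation bias."""
        if done:
            return reward
        best_action = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
        return reward + gamma * next_q_target[best_action]

    # Plain DQN would use max(next_q_target) = 3.0; Double DQN evaluates the
    # action the online network prefers (action 0), giving 1.0 + 0.99 * 2.0.
    print(double_dqn_target(1.0, next_q_online=[5.0, 1.0], next_q_target=[2.0, 3.0], done=False))
    ```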

    Dueling Deep Q-Networks

    Dueling Deep Q-Networks (Dueling DQN) improves the learning process of the traditional Deep Q-Network (DQN) by separating the estimation of state values from action advantages. In the traditional DQN, an individual Q-value is calculated for every state-action combination, representing the expected cumulative reward. However, this can be inefficient, particularly when numerous actions result in similar consequences. Dueling DQN handles this issue by breaking down the Q-value into two primary parts: the state value V(s) and the advantage function A(s,a). The Q-value is then given by Q(s,a) = V(s) + A(s,a), where V(s) captures the value of being in a given state, and A(s,a) measures how much better an action is than the others in the same state.

    Dueling DQN helps the agent to enhance its understanding of the environment and prevent the learning of unnecessary action-value estimates by separately estimating state values and action advantages. This results in improved performance, particularly in situations with delayed rewards, allowing the agent to gain a better understanding of the importance of various states when choosing the optimal action.
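
    A minimal sketch of the dueling combination. Note that practical implementations subtract the mean advantage so the decomposition is identifiable, a detail the formula above omits:

    ```python
    import numpy as np

    def dueling_q(state_value, advantages):
        """Combine V(s) and A(s, a) into Q(s, a), centering the advantages."""
        advantages = np.asarray(advantages, dtype=float)
        return state_value + (advantages - advantages.mean())

    print(dueling_q(5.0, [1.0, -1.0, 0.0]))  # [6. 4. 5.]
    ```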

  • Deep Reinforcement Learning Algorithms

    Deep reinforcement learning algorithms are a family of machine learning algorithms that combine deep learning and reinforcement learning.

    Deep reinforcement learning addresses the challenge of enabling computational agents to learn decision-making by incorporating deep learning from unstructured input data without manual engineering of the state space.

    Deep reinforcement learning algorithms are capable of deciding what actions to perform for the optimization of an objective even with large inputs.

    Reinforcement Learning

    Reinforcement Learning consists of an agent that learns from the feedback given in response to its actions while exploring an environment. The main goal of the agent is to maximize cumulative rewards by developing a strategy that guides decision-making in all possible scenarios.

    Role of Deep Learning in Reinforcement Learning

    In traditional reinforcement learning algorithms, tables or simple function approximators are commonly used to represent value functions, policies, or models. These strategies are not efficient enough for challenging settings like video games, robotics, or natural language processing. Neural networks allow the approximation of complex, high-dimensional functions through deep learning. This forms the basis of Deep Reinforcement Learning.

    Some of the benefits of the combination of deep learning networks and reinforcement learning are −

    • Dealing with inputs with high dimensions (such as raw images and continuous sensor data).
    • Understanding complex relationships between states and actions through learning.
    • Learning a common representation by generalizing among different states and actions.

    Deep Reinforcement Learning Algorithms

    The following are some of the common deep reinforcement learning algorithms −

    1. Deep Q-Networks

    A Deep Q-Network (DQN) is an extension of conventional Q-learning that employs deep neural networks to estimate the action-value function Q(s,a). Instead of storing Q-values in a table, DQN uses a neural network to handle complicated input domains like game pixel data. This lets reinforcement learning address complex tasks, like playing Atari games, where the agent learns from visual inputs.

    DQN improves training stability through two primary methods: experience replay, which stores and resamples past experiences, and target networks, which maintain consistent Q-value targets by refreshing a separate network periodically. These advancements help DQN learn effectively in large-scale settings.

    2. Double Deep Q-Networks

    Double Deep Q-Network (DDQN) enhances Deep Q-Network (DQN) by mitigating the problem of overestimation bias in Q-value updates. In typical DQN, a single Q-network is utilized for both action selection and value estimation, potentially resulting in overly optimistic value approximations.

    DDQN uses two distinct networks to manage action selection and evaluation: a current Q-network for choosing the action and a target Q-network for evaluating it. This reduces the bias in the Q-value estimates and leads to improved learning accuracy. DDQN also incorporates the experience replay and target network methods used in DQN to improve robustness and dependability.

    3. Dueling Deep Q-Networks

    Dueling Deep Q-Networks (Dueling DQN) is an extension of the standard Deep Q-Network (DQN) used in reinforcement learning. It separates the Q-value into two components: the state value function V(s) and the advantage function A(s,a), which estimates how much better each action is than the average action in that state.

    The final Q-value is estimated by combining these elements. This representation improves the robustness and effectiveness of Q-learning: the model can estimate the state value more accurately, and the need for precise action-value estimates in every situation is reduced.

    4. Policy Gradient Methods

    Policy Gradient Methods are algorithms based on a policy iteration approach in which the policy is directly adjusted to reach the optimal policy that maximizes the expected reward. Rather than learning a value function, these methods maximize rewards by following the gradient of a defined objective with respect to the policy parameters.

    The main steps are computing the gradient of the expected reward and updating the policy accordingly; a sketch of the REINFORCE update follows. Representative algorithms include REINFORCE, Actor-Critic, and Proximal Policy Optimization (PPO). These approaches can be applied effectively in high-dimensional or continuous action spaces.
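
    A minimal sketch of one REINFORCE update for a softmax policy over discrete actions, with the learning rate and return value as placeholders:

    ```python
    import numpy as np

    def softmax(logits):
        z = np.exp(logits - logits.max())
        return z / z.sum()

    def reinforce_step(theta, action, G, lr=0.01):
        """theta += lr * G * grad log pi(action | theta): actions followed
        by high returns G become more probable."""
        grad_log_pi = -softmax(theta)
        grad_log_pi[action] += 1.0   # gradient of log-softmax at the taken action
        return theta + lr * G * grad_log_pi

    theta = np.zeros(3)                              # uniform policy over 3 actions
    theta = reinforce_step(theta, action=1, G=2.0)   # reinforce action 1
    print(softmax(theta))                            # action 1 now slightly preferred
    ```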

    5. Proximal Policy Optimization

    Proximal Policy Optimization (PPO) is a reinforcement learning algorithm designed to achieve more stable and efficient policy optimization. It updates policies by maximizing an objective function associated with the policy, but caps how much a single update can change the policy to avoid drastic shifts.

    Because a new policy should not stray too far from the old one, PPO adopts a clipped objective that prevents large changes between consecutive policies. This balance between exploration and exploitation avoids performance degradation and promotes smoother convergence. PPO is applied in deep reinforcement learning for both continuous and discrete action spaces due to its simplicity and effectiveness.

  • Deep Reinforcement Learning

    What is Deep Reinforcement Learning?

    Deep Reinforcement Learning (Deep RL) is a subset of Machine Learning that is a combination of reinforcement learning with deep learning. Deep RL addresses the challenge of enabling computational agents to learn decision-making by incorporating deep learning from unstructured input data without manual engineering of the state space. Deep RL algorithms are capable of deciding what actions to perform for the optimization of an objective even with large inputs.

    Key Concepts of Deep Reinforcement Learning

    The building blocks of Deep Reinforcement Learning include all the aspects that enable learning and decision-making by agents. The following elements work together to produce effective learning −

    • Agent − The learner and decision-maker who interacts with the environment. This agent acts according to the policies and gains experience.
    • Environment − The system outside agent that it communicates with. It gives the agent feedback in the form of incentives or punishments based on its actions.
    • State − Represents the current situation or condition of the environment at a specific moment, based on which the agent takes a decision.
    • Action − A choice the agent makes that changes the state of the system.
    • Policy − A plan that directs the agent’s decision-making by mapping states to actions.
    • Value Function − Estimates the expected cumulative reward an agent can achieve from a given state while following a specific policy.
    • Model − Represents the environment’s dynamics, allowing the agent to simulate potential outcomes of actions and states for planning purposes.
    • Exploration – Exploitation Strategy − A decision-making approach that balances exploring new actions for learning versus exploiting known actions for immediate rewards.
    • Learning Algorithm − The method by which the agent updates its value function or policy based on experiences gained from interacting with the environment.
    • Experience Replay − A technique that randomly samples from previously stored experiences during training to enhance learning stability and reduce correlations between consecutive events.

    How Deep Reinforcement Learning Works?

    Deep Reinforcement Learning uses artificial neural networks, which consist of layers of nodes that replicate the functioning of neurons in the human brain. These nodes process and relay information through the trial and error method to determine effective outcomes.

    In Deep RL, the term policy refers to the strategy the computer develops based on the feedback it receives from interacting with its environment. These policies help the computer make decisions by considering its current state and the action set, which includes various options. Selecting among these options involves a process referred to as "search," in which the computer evaluates different actions and observes the outcomes. This ability to coordinate learning, decision-making, and representation may provide insights similar to how the human brain operates.

    Architecture is what sets deep reinforcement learning apart and allows it to learn in a way reminiscent of the human brain: it contains numerous layers of neural networks that can process unlabeled and unstructured data.

    List of Algorithms in Deep RL

    Following is the list of some important algorithms in deep reinforcement learning −

    • Deep Q-Network or Deep Q-Learning
    • Double Deep Q-Learning
    • Actor – Critic Method
    • Deep Deterministic Policy Gradient

    Applications of Deep Reinforcement Learning

    Some prominent fields that use deep Reinforcement Learning are −

    1. Gaming

    Deep RL is used to develop game-playing agents whose performance goes far beyond what is humanly possible. Games tackled with Deep RL include Atari 2600 games, Go, Poker, and many more.

    2. Robot Control

    Robot control often uses robust adversarial reinforcement learning, wherein an agent learns to operate in the presence of an adversary that applies disturbances to the system. The goal is to develop an optimal strategy to handle disruptions. AI-powered robots have a wide range of applications, including manufacturing, supply chain automation, healthcare, and many more.

    3. Self-driving Cars

    Deep reinforcement learning is one of the key techniques involved in autonomous driving. Autonomous driving scenarios involve understanding the environment, interacting agents, negotiation, and dynamic decision-making, all of which reinforcement learning is well suited to handle.

    4. Healthcare

    Deep reinforcement learning has enabled many advancements in healthcare, such as personalized medication to optimize patient care, especially for those suffering from chronic conditions.

    Difference Between RL and Deep RL

    The following table highlights the key differences between Reinforcement Learning(RL) and Deep Reinforcement Learning (Deep RL) −

    Feature | Reinforcement Learning | Deep Reinforcement Learning
    Definition | A subset of Machine Learning that uses trial and error for decision-making. | A subset of RL that integrates deep learning for more complex decisions.
    Function Approximation | Uses simple methods like tabular methods for value estimation. | Uses neural networks for value estimation, allowing more complex representations.
    State Representation | Relies on manually engineered features to represent the environment. | Automatically learns relevant features from raw input data.
    Complexity | Effective for simple environments with smaller state/action spaces. | Effective in high-dimensional, complex environments.
    Performance | Effective in simpler environments but struggles with large, continuous spaces. | Excels in complex tasks such as video games or controlling robots.
    Applications | Basic tasks like simple games. | Advanced applications like autonomous driving, game playing, and robotic control.
  • Forms

    This chapter will discuss Bootstrap forms. A form enables a user to enter data such as a name, email address, or password, which can then be sent to a server for processing. Bootstrap provides classes to create a variety of forms with varied styles, layouts, and custom components.

    Basic form

    • Form controls in Bootstrap extend Rebooted form styles with classes. Use these classes for consistent rendering across browsers and devices with customized display.
    • To use more recent input controls, such as email verification, number selection, and other features, be sure to use an appropriate type attribute on all inputs (e.g., email for email addresses or number for numerical data).

    The following example demonstrates Bootstrap's basic form.

    https://www.tutorialspoint.com/bootstrap/examples/form_basic_form.php

    Example


    <!DOCTYPE html>
    <html lang="en">
    <head>
       <title>Bootstrap - Form</title>
       <meta charset="UTF-8">
       <meta http-equiv="X-UA-Compatible" content="IE=edge">
       <meta name="viewport" content="width=device-width, initial-scale=1.0">
       <link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet">
       <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js"></script>
    </head>
    <body>
       <form>
          <div class="mb-3">
             <label for="sampleInputEmail" class="form-label">Username</label>
             <input type="email" class="form-control" id="sampleInputEmail" aria-describedby="emailHelp">
          </div>
          <div class="mb-3">
             <label for="sampleInputPassword" class="form-label">Password</label>
             <input type="password" class="form-control" id="sampleInputPassword">
          </div>
          <div class="mb-3 form-check">
             <input type="checkbox" class="form-check-input" id="sampleCheck">
             <label class="form-check-label" for="sampleCheck">Remember me</label>
          </div>
          <button type="submit" class="btn btn-primary">Log in</button>
       </form>
    </body>
    </html>

    Disabled forms

    • To prevent user interactions and make an input appear lighter, use the disabled boolean attribute.
    • To disable all the controls in a <fieldset>, add the disabled attribute to it. Browsers treat all native form controls (<input>, <select>, and <button> elements) inside a <fieldset disabled> as disabled, preventing both keyboard and mouse interactions with them.
    • If a form has custom button-like elements, such as <a class="btn btn-*">...</a>, they only get pointer-events: none, which means they are still focusable and keyboard-operable. To prevent them from receiving focus use tabindex="-1", and use aria-disabled="disabled" to signal their state to assistive technologies.
    https://www.tutorialspoint.com/bootstrap/examples/form_disabled_form.php

    Example


    <!DOCTYPE html>
    <html lang="en">
    <head>
       <title>Bootstrap - Form</title>
       <meta charset="UTF-8">
       <meta http-equiv="X-UA-Compatible" content="IE=edge">
       <meta name="viewport" content="width=device-width, initial-scale=1.0">
       <link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet">
       <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js"></script>
    </head>
    <body>
       <form>
          <fieldset disabled>
             <div class="mb-3">
                <label for="disabledEmailInput" class="form-label">Disabled Input</label>
                <input type="text" id="disabledEmailInput" class="form-control" placeholder="Email Id">
             </div>
             <div class="mb-3">
                <label for="disabledPasswordInput" class="form-label">Disabled Input</label>
                <select id="disabledPasswordInput" class="form-select">
                   <option>Password</option>
                </select>
             </div>
             <div class="mb-3">
                <div class="form-check">
                   <input class="form-check-input" type="checkbox" id="disabledcheckbox" disabled>
                   <label class="form-check-label" for="disabledcheckbox">Disabled Check Box</label>
                </div>
             </div>
             <button type="submit" class="btn btn-primary">Disabled Button</button>
          </fieldset>
       </form>
    </body>
    </html>

    Accessibility

    • Every form control should have an appropriate accessible name for assistive technology users. Using label elements, or descriptive text within <button>...</button>, is the simplest way to achieve this.
    • When a visible label or appropriate text content is not provided, use other approaches for accessible names, for example:
      • Use the class .visually-hidden to hide the <label> elements.
      • Use aria-labelledby to point to an existing element that behaves as a <label>.
      • Include a title attribute.
      • Use aria-label to set the element's accessible name.
    • When none of these are available, assistive technology may fall back to the placeholder attribute for the accessible name on <input> and <textarea> elements.
    • Using visually hidden content helps assistive technology users, but certain users may still have issues with a lack of visible label text.
  • setInterval() Method

    JavaScript setInterval() Method

    In JavaScript, setInterval() is a window method that is used to execute a function repeatedly at a specified interval. In contrast, the setTimeout() method executes a function only once after the specified time.

    The window object contains the setInterval() method. However, you can call the setInterval() method without referencing the window object.

    Syntax

    Following is the syntax to use the setInterval() method in JavaScript −

    setInterval(callback, interval, arg1, arg2, ..., argN);

    The first two parameters are required; the others are optional.

    Parameters

    • callback − The callback function that will be executed after every interval.
    • interval − The number of milliseconds after which the callback function should be executed.
    • arg1, arg2, ..., argN − Optional arguments to pass to the callback function.

    Return Value

    The setInterval() method returns a numeric id, which you can pass to clearInterval() to stop the repetition.

    Example

    In the code below, the startTimer() function uses the setInterval() method to call the timer() function every 1000 milliseconds.

    The timer() function increments the seconds global variable each time it is called by the setInterval() method and prints the total.

    You can click the button to start a timer in the output.


    <html>
    <body>
       <button onclick="startTimer()">Start Timer</button>
       <div id="output"></div>
       <script>
          let output = document.getElementById('output');
          var seconds = 0;
          function startTimer() {
             setInterval(timer, 1000); // Calls timer() function after every second
          }
          function timer() { // Callback function
             seconds++;
             output.innerHTML += "Total seconds are: " + seconds + "<br>";
          }
       </script>
    </body>
    </html>

    Output

    After clicking the button, the output prints "Total seconds are: 1", "Total seconds are: 2", and so on, adding one line every second.

    Arrow Function with setInterval() Method

    The example below contains almost the same code as the example above. Here, we pass an arrow function as the first argument of the setInterval() method rather than passing a function name. You can click the button to start the timer.

    Example


    <html>
    <body>
       <button onclick="startTimer()">Start Timer</button>
       <div id="output"></div>
       <script>
          let output = document.getElementById('output');
          var seconds = 0;
          function startTimer() {
             setInterval(() => { // Callback runs after every second
                seconds++;
                output.innerHTML += "Total seconds are: " + seconds + "<br>";
             }, 1000);
          }
       </script>
    </body>
    </html>

    Output

    The output behaves like the previous example, printing the running total of seconds every second.

    Passing More than 2 Arguments to setInterval() Method

    In the code below, we have passed 3 arguments to the setInterval() method. The first argument is a callback function to print the date, the second argument is an interval, and the third argument will be passed to the callback function.

    Example


    <html>
    <body>
       <button onclick="startTimer()">Start Date Timer</button>
       <div id="output"></div>
       <script>
          let output = document.getElementById('output');
          function startTimer() {
             let message = "The date and time is: ";
             setInterval(printDate, 1000, message); // message is passed to printDate
          }
          function printDate(message) {
             let date = new Date();
             output.innerHTML += message + date + "<br>";
          }
       </script>
    </body>
    </html>

    Output

    After clicking the button, the output prints "The date and time is: " followed by the current date and time, once every second.

    The clearInterval() Method in JavaScript

    The JavaScript clearInterval() method is used to stop the code execution started by the setInterval() method.

    It takes the numeric id returned by the setInterval() method as an argument.

    Syntax

    Follow the syntax below to use the clearInterval() method.

    clearInterval(id);

    Here id is an id returned by the setInterval() method.

    Example

    In the code below, we have used the setInterval() method to increment a number by 10 every second and display it.

    When the number becomes 50, we stop the timer using the clearInterval() method.


    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById('output');
          let number = 10;
          let id = setInterval(() => {
             if (number == 50) {
                clearInterval(id); // Stop the timer
                output.innerHTML += "The time is stopped.";
             }
             output.innerHTML += "The number is: " + number + "<br>";
             number += 10;
          }, 1000);
       </script>
    </body>
    </html>

    Output

    The number is: 10
    The number is: 20
    The number is: 30
    The number is: 40
    The time is stopped.The number is: 50
    

    Real-time Use Case of the setInterval() Method

    In the above examples, we have printed the messages using the setInterval() method. In this section, we will see the real-time use cases of the setInterval() method.

    Here, we have listed some of the real-time use cases.

    • To refresh the date
    • For slideshow
    • For animation
    • To show the clock on the webpage
    • To update live cricket score
    • To update weather information
    • To run cron jobs

    Here are the real-time examples of the setInterval() method.

    Flipping the color of the HTML element after each interval

    In the code below, we flip the color of the <div> element after every second.

    We have defined the div element in the HTML body.

    In the <head> section, we have added the red and green classes, each of which sets a background color.

    In JavaScript, we have passed the callback function as the first argument of the setInterval() method, which will be called after every 1000 milliseconds.

    We access the <div> element in the callback function using its id. After that, we check whether the classList of the <div> element contains the red class. If yes, we remove it and add the green class. Similarly, if classList contains the green class, we remove it and add the red class.

    This is how we are flipping the color of the <div> element using the setInterval() method.

    Example


    <html>
    <head>
       <style>
          .red { background-color: red; }
          .green { background-color: green; }
          #square { height: 200px; width: 200px; }
       </style>
    </head>
    <body>
       <div>Using setInterval() method to flip the color of the HTML element after each interval</div>
       <div id="square" class="red"></div>
       <script>
          setInterval(function () {
             let square = document.getElementById('square');
             if (square.classList.contains('red')) {
                square.classList.remove('red');
                square.classList.add('green');
             } else {
                square.classList.remove('green');
                square.classList.add('red');
             }
          }, 1000);
       </script>
    </body>
    </html>

    Output

    The square's background color flips between red and green every second.

    Moving Animation Using the setInterval() Method

    In the code below, we create moving animation using the setInterval() method.

    We have created the two nested div elements. The outer div has the parent id, and the inner div has the child id. We have set dimensions for both div elements and position to relative.

    In JavaScript, we have initialized the left variable with 0. After that, we invoke the callback function of the setInterval() method after every 50 milliseconds.

    In each interval, we change the position of the <div> element by 5px, and when the left position becomes 450px, we stop the animation.

    Example


    <html>
    <head>
       <style>
          #parent {
             position: relative;
             height: 50px;
             width: 500px;
             background-color: yellow;
          }
          #child {
             position: relative;
             height: 50px;
             width: 50px;
             background-color: red;
          }
       </style>
    </head>
    <body>
       <div id="parent">
          <div id="child"></div>
       </div>
       <script>
          let child = document.getElementById('child');
          let left = 0;
          // Moving animation using the setInterval() method
          setInterval(() => {
             if (left < 450) {
                left += 5;
                child.style.left = left + 'px';
             }
          }, 50);
       </script>
    </body>
    </html>

    Output

    The red square moves 5px to the right every 50 milliseconds until it reaches the right edge of the yellow bar.

    You can also use the setInterval() method to run particular code asynchronously.

  • setTimeout() Method

    JavaScript setTimeout() Method

    In JavaScript, setTimeout() is a global method that allows you to execute a function or a particular piece of JavaScript code only once after a specified time.

    The window object contains the setTimeout() method. You may use the window object to execute the setTimeout() method.

    The setTimeout() method can also be used to manipulate the DOM elements after the specified time of the user interaction.

    Syntax

    The syntax of the setTimeout() method in JavaScript is as follows −

    window.setTimeout(callback, delay, param1, param2, ..., paramN);

    OR

    setTimeout(callback, delay, param1, param2, ..., paramN);

    The setTimeout() method takes at least 2 parameters.

    Parameters

    • callback − The callback function that will be called after the specified time. You can pass an arrow function, a function expression, or a regular function as the value of this parameter.
    • delay − The number of milliseconds after which the callback function should be called. Here, 1 second is equal to 1000 milliseconds.
    • param1, param2, ..., paramN − Optional parameters to be passed to the callback function.

    Return Value

    It returns a numeric id, which you can use to clear the timeout.

    Example

    In the code below, we have defined the timeout() function, which prints a message on the web page.

    We passed the timeout() function as the first argument of the setTimeout() method, and 1000 milliseconds as the second argument.

    The setTimeout() method will invoke the timeout() function after 1 second (1000 milliseconds).


    <html>
    <body>
       <div id="output"></div>
       <script>
          document.getElementById('output').innerHTML = "Wait for a message! <br>";
          setTimeout(timeout, 1000);
          function timeout() {
             document.getElementById('output').innerHTML += "This message is printed after 1 second!";
          }
       </script>
    </body>
    </html>

    Output

    Wait for a message!
    This message is printed after 1 second!
    

    Arrow Function with setTimeout() Method

    In the code below, we have passed an arrow function as the first argument of the setTimeout() method. It works the same as passing a function name as the argument and defining the function separately.

    It prints the message after 2000 milliseconds.

    Example


    <html>
    <body>
       <div id="output"></div>
       <script>
          document.getElementById('output').innerHTML += "You will see the message after 2000 milliseconds! <br>";
          setTimeout(() => {
             document.getElementById('output').innerHTML += 'Hi! How are you?';
          }, 2000);
       </script>
    </body>
    </html>

    Output

    You will see the message after 2000 milliseconds!
    Hi! How are you?
    

    Passing More than 2 Arguments to setTimeout() Method

    You can pass more than 2 arguments to the setTimeout() method. The first argument is a callback function, the second is the delay in milliseconds, and any further arguments are passed to the callback function's parameters.

    In the code below, we have passed 5 arguments to the setTimeout() method. In the sum() function, we receive the last 3 arguments of the setTimeout() method as parameters and add them.

    Example


    <html>
    <body>
       <div>Wait for the sum of 3 numbers!</div>
       <div id="output"></div>
       <script>
          setTimeout(sum, 1000, 10, 20, 30); // 10, 20, 30 are passed to sum()
          function sum(num1, num2, num3) {
             let result = num1 + num2 + num3;
             document.getElementById('output').innerHTML = "Sum = " + result;
          }
       </script>
    </body>
    </html>

    Output

    Wait for the sum of 3 numbers!
    Sum = 60
    

    Execute Code After Every N Seconds

    We have created a counter using the setTimeout() method in the code below.

    We have defined the global variable p for the counter. In the counter() function, we print the counter value and use the setTimeout() method to call the counter() function again after 1000 milliseconds.

    Example


    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById('output');
          output.innerHTML += "The output of the counter is given below. <br>";
          var p = 0;
          function counter() {
             output.innerHTML += "count is - " + p + ".<br>";
             setTimeout(counter, 1000); // Schedule the next tick
             p++;
          }
          counter();
       </script>
    </body>
    </html>

    Output

    The output of the counter is given below.
    count is - 0.
    count is - 1.
    count is - 2.
    count is - 3.
    count is - 4.
    count is - 5.
    count is - 6.
    

    JavaScript clearTimeout() Method

    Sometimes, developers need to cancel a timeout before it executes the function or the JavaScript code. In such cases, you can use the clearTimeout() method.

    Syntax

    You can follow the syntax below to use the clearTimeout() method.

    clearTimeout(id);

    Parameters

    id − The id returned by the setTimeout() method for the timeout you want to cancel.

    Example

    In the below code, we have defined the startTimeOut() and stopTimeOut() functions, which will be called when users press the respective buttons.

    In the startTimeOut() function, we set a timeout of 3 seconds and store the id returned by the setTimeout() method in the timeout variable.

    In the stopTimeOut() function, we use the clearTimeout() method and pass timeout as an argument to clear the timeout.


    <html>
    <body>
       <p>Click the Stop timeout button within 3 seconds after pressing the Start timeout button.</p>
       <button onclick="startTimeOut()">Start Timeout</button>
       <button onclick="stopTimeOut()">Stop Timeout</button>
       <p id="output"></p>
       <script>
          let output = document.getElementById('output');
          let timeout;
          function startTimeOut() {
             timeout = setTimeout(() => {
                output.innerHTML = "Timeout is done";
             }, 3000);
          }
          function stopTimeOut() {
             clearTimeout(timeout);
             output.innerHTML = "Timeout is stopped";
          }
       </script>
    </body>
    </html>

    Output

    If you click Stop Timeout within 3 seconds, the output shows "Timeout is stopped"; otherwise it shows "Timeout is done".

    Zero Delay setTimeout

    A zero delay timeout means you call the setTimeout() method passing 0 milliseconds as the delay argument.

    Passing 0 milliseconds does not guarantee that the callback function runs immediately; it depends on the pending tasks in the queue. Once the queued tasks are completed, the callback function's code is executed.

    Now, the question is: what is the need for a zero delay timeout?

    Sometimes, you need to execute particular JavaScript code as soon as possible once the script gets loaded into the browser. In such cases, you can use the setTimeout() method, passing 0 milliseconds as the second argument.

    Syntax

    Follow the syntax below to use the zero-delay timeout.

    setTimeout(callback, 0);

    In the above syntax, we have passed the callback function as the first parameter and 0 milliseconds as the second parameter.

    Example

    In the code below, we add a start message, a zero delay timeout message, and an end message to the web page.

    In the output, you can see that it prints the start message first, then the end message, and finally the zero delay timeout message. So the zero delay callback executes only after the whole script has run.


    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById('output');
          output.innerHTML += "The code execution started. <br>";
          setTimeout(function () {
             output.innerHTML += "Inside the zero delay timeout. <br>";
          }, 0);
          output.innerHTML += "The code execution ended. <br>";
       </script>
    </body>
    </html>

    Output

    The code execution started.
    The code execution ended.
    Inside the zero delay timeout.
    

    You can also use the setTimeout() method recursively, as shown in the counter example. Furthermore, you can pass an anonymous function expression or an arrow function as the first parameter. And if you want particular JavaScript code to run once the whole script has executed, you can use a zero delay timeout.

  • Timing Events

    What are the Timing Events?

    JavaScript timing events are used to execute the code after a specified time only once or multiple times. In JavaScript, you can use the timing events to control when a particular task should be executed.

    The ‘window’ object contains the various methods for timing events, which you can use to schedule tasks. You can call these methods with or without the window object.

    Here is the list of methods that can be used to schedule the tasks.

    Method | Description
    setTimeout() | Executes the code only once after N milliseconds.
    clearTimeout() | Clears the timeout that was set using the setTimeout() method.
    setInterval() | Executes a particular task after every N milliseconds.
    clearInterval() | Clears the interval that was set using the setInterval() method.

    Let’s understand the timing events via the example below.

    The setTimeout() Method

    <html>
    <body>
       <div id="output">The message will be printed after 2000 milliseconds! <br></div>
       <script>
          setTimeout(() => {
             document.getElementById('output').innerHTML += 'Hello World <br>';
          }, 2000);
       </script>
    </body>
    </html>

    Output

    The message will be printed after 2000 milliseconds!
    Hello World
    

    The clearTimeout() Method

    In the example below, we use the setTimeout() method to print 'Hello World!' after 3000 milliseconds. We use the clearTimeout() method to prevent the setTimeout() callback from executing.

    Example

    <html>
    <body>
       <p>Message will print after 3 seconds.</p>
       <p>Click the button to prevent timeout to execute.</p>
       <p id="demo"></p>
       <button onclick="stop()">Clear Timeout</button>
       <script>
    
      const myTimeout = setTimeout(greet, 3000);
      function greet() {
         document.getElementById("demo").innerHTML = "Hello World!"
      }
      function stop() {
         clearTimeout(myTimeout);
      }
    </script> </body> </html>

    Output

    If you click the Clear Timeout button within 3 seconds, "Hello World!" is never printed; otherwise it appears after 3 seconds.

    The setInterval() and clearInterval() Methods

    In the code below, we have used the setInterval() method to increment a number by 10 every second and display it.

    When the number becomes 50, we stop the timer using the clearInterval() method.

    Example

    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById('output');
          let number = 10;
          let id = setInterval(() => {
             if (number == 50) {
                clearInterval(id);
                output.innerHTML += "The time is stopped.";
             }
             output.innerHTML += "The number is: " + number + "<br>";
             number += 10;
          }, 1000);
       </script>
    </body>
    </html>

    Output

    The number is: 10
    The number is: 20
    The number is: 30
    The number is: 40
    The time is stopped.The number is: 50
    

    Real-time Use Cases of Timing Events

    Here, you will learn the real-time use cases of the timing events.

    • For animation and transition
    • For slideshow and carousel
    • For countdown timers
    • For user authentication timeouts
    • To autosave drafts like Google docs
    • To schedule notifications, email, message, etc.
    • To terminate the session as like banking websites
    • For progress bar

    However, there are other use cases as well. You can use the setTimeout() or setInterval() methods to achieve the above functionalities.

    What's Next?

    In the following chapters, you will learn about the setTimeout() and setInterval() methods in detail.

  • Promises Chaining

    Promise chaining in JavaScript can handle multiple related asynchronous operations with a single promise chain. While a single promise handles a single asynchronous operation, promise chaining allows you to create a sequence of promises, where the resolution or rejection of one promise triggers the execution of the next. This enables you to handle multiple asynchronous operations.

    In JavaScript, we produce promise code using the Promise() constructor and consume it using the then() method, which handles a single asynchronous operation. To handle multiple asynchronous operations this way, we need multiple promises, as shown in the example below.

    Example

    In the code below, we have defined promise1, which resolves after 1 second. Also, we have defined the global data variable.

    After that, we used the then() method to consume promise1, and inside the callback function, we stored the resolved value in the data variable.

    Next, we defined promise2, which resolves after 2 seconds. We then used the then() method with promise2 and used the data variable inside its callback function.

    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById("output");
          var data;
          // First promise
          let promise1 = new Promise((resolve, reject) => {
             setTimeout(() => { resolve(10); }, 1000);
          });
          promise1.then((value) => {
             data = value; // Storing the value into data
             output.innerHTML += "The promise1 is resolved and data is: " + data + "<br>";
          });
          // Second promise
          let promise2 = new Promise((resolve, reject) => {
             setTimeout(() => { resolve(20); }, 2000);
          });
          promise2.then((value) => {
             data = data * value; // Using the data from the first promise
             output.innerHTML += "The promise2 is resolved and data is: " + value + "<br>";
             output.innerHTML += "The final value of the data is: " + data + "<br>";
          });
       </script>
    </body>
    </html>

    Output

    The promise1 is resolved and data is: 10
    The promise2 is resolved and data is: 20
    The final value of the data is: 200
    

    In the above example, we created two separate promises to perform multiple operations on the data returned from promise1. This increases code complexity and decreases readability. Here, promise chaining comes into the picture.

    JavaScript Promise Chaining

    The concept of promise chaining in JavaScript allows you to perform multiple related asynchronous operations with a single promise.

    You can chain multiple then() methods while consuming the promise to perform multiple asynchronous operations in sequence.

    Syntax

    The syntax of the promise chaining in JavaScript is as follows −

    promise
       .then(callback)
       .then(callback)
       ...
       .then(callback);

    In the above syntax, we have used multiple then() methods to handle multiple asynchronous operations. Each then() method executes a single callback function, and its return value is passed to the next then() method.

    Example

    In the code below, we have defined promise1. After that, we used a promise chain to perform multiple asynchronous operations.

    From the first then() method, we return the value after multiplying it by 2. In the next then() method, we print the updated value and return it, again multiplied by 2. We perform a similar operation in the third then() method.

    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById("output");
          const promise1 = new Promise((resolve, reject) => {
             resolve(2);
          });
          // Promise chaining
          promise1
          .then((value) => {
             output.innerHTML = "The square of 2 is " + value * 2 + "<br>";
             return value * 2; // Returned value is passed to the next then() method
          })
          .then((value) => {
             output.innerHTML += "The square of 4 is " + value * 2 + "<br>";
             return value * 2;
          })
          .then((value) => {
             output.innerHTML += "The square of 8 is " + value * 2 + "<br>";
          });
       </script>
    </body>
    </html>

    Output

    The square of 2 is 4
    The square of 4 is 8
    The square of 8 is 16
    

    Multiple Promise Handlers

    You can also use multiple promise handlers to consume a single promise. However, attaching multiple independent handlers like this is not called promise chaining.

    Example

    In the code below, we have created promise1.

    After that, we used multiple promise handlers to consume the promise. Each handler consumes the promise separately.

    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById("output");
          const promise1 = new Promise((resolve, reject) => {
             resolve(2);
          });
          promise1.then((value) => {
             output.innerHTML += "Inside the first promise handler. <br>";
             return value * 2;
          });
          promise1.then((value) => {
             output.innerHTML += "Inside the second promise handler. <br>";
             return value * 2;
          });
          promise1.then((value) => {
             output.innerHTML += "Inside the third promise handler. <br>";
             return value * 2;
          });
       </script>
    </body>
    </html>

    Output

    Inside the first promise handler.
    Inside the second promise handler.
    Inside the third promise handler.
    
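    Each handler above receives the promise's original resolved value (2); the value one handler returns is never passed to another handler. Below is a minimal sketch contrasting the two styles; the messages and variable names are illustrative, not from this chapter.

    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById("output");
          const promise = new Promise((resolve) => { resolve(2); });
          // Independent handlers: each one receives the original value 2.
          promise.then((value) => {
             output.innerHTML += "Handler got: " + value + "<br>";
             return value * 2; // Not visible to the other handlers
          });
          promise.then((value) => {
             output.innerHTML += "Handler got: " + value + "<br>";
          });
          // A chain: the second then() receives the first one's return value.
          promise
          .then((value) => value * 2)
          .then((value) => {
             output.innerHTML += "Chain got: " + value + "<br>";
          });
       </script>
    </body>
    </html>

    Here, both independent handlers print 2, while the chained then() prints 4.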

    Error Handling with Promise Chaining

    You can use the catch() method with promise chaining to handle errors.

    If you place the catch() method last, after all the then() methods, it catches an error thrown in any of them. If you place the catch() method between then() methods, it only catches errors from the then() methods before it.

    Let's understand this via the example below.

    Example

    In the code below, we have defined the promise and rejected it.

    After that, we used promise chaining to consume the promise, with two then() methods followed by one catch() method.

    In the output, you can see that because we rejected the promise, control goes into the catch() method.

    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById("output");
          const promise1 = new Promise((resolve, reject) => {
             reject("There is an error.");
          });
          promise1
          .then((value) => {
             output.innerHTML += "The returned value is: " + value + "<br />";
             return value + " Everything is fine!";
          })
          .then((value) => {
             output.innerHTML += value;
          })
          .catch((error) => {
             output.innerHTML += error;
          });
       </script>
    </body>
    </html>

    Output

    There is an error.
    
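    The example above places catch() at the end of the chain. As a complementary sketch, the code below places catch() between then() methods: it handles the rejection raised before it, and because catch() itself returns a promise, the chain continues with the then() that follows it. The messages are illustrative, not from this chapter.

    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById("output");
          const promise1 = new Promise((resolve, reject) => {
             reject("There is an error.");
          });
          promise1
          .then((value) => {
             // Skipped: the promise was rejected.
             output.innerHTML += "The returned value is: " + value + "<br>";
          })
          .catch((error) => {
             // Handles the rejection from the steps before it.
             output.innerHTML += error + "<br>";
             return "Recovered!";
          })
          .then((value) => {
             // Runs after catch(), receiving its return value.
             output.innerHTML += value;
          });
       </script>
    </body>
    </html>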

    Returning the Promise

    When you return a value from a then() method, it is wrapped in a promise by default and resolved with the returned value.

    However, you can also return a promise explicitly, for example to add a delay or to reject it.

    Example

    In the code below, we have defined promise1 and used the setTimeout() method inside the function passed to the Promise() constructor.

    After that, we consume the promise using multiple then() methods. From each then() method, we return a new promise.

    When you return only a value from a then() method, the resulting promise gets resolved immediately. But when you want to add some delay, you can return a promise from the then() method.

    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById("output");
          const promise1 = new Promise((resolve, reject) => {
             setTimeout(() => { resolve("Stage 1"); }, 500);
          });
          promise1
          .then((value) => {
             output.innerHTML += value + "<br />";
             return new Promise((resolve, reject) => {
                setTimeout(() => { resolve("Stage 2"); }, 1000);
             });
          })
          .then((value) => {
             output.innerHTML += value + "<br />";
             return new Promise((resolve, reject) => {
                setTimeout(() => { resolve("Stage 3"); }, 200);
             });
          })
          .then((value) => {
             output.innerHTML += value + "<br />";
             output.innerHTML += "Finished";
          });
       </script>
    </body>
    </html>

    Output

    Stage 1
    Stage 2
    Stage 3
    Finished
    
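    As noted above, the promise you return from a then() method can also reject; the rejection is then handled by a catch() method later in the chain. Below is a minimal sketch; the stage names and delay are illustrative.

    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById("output");
          const promise1 = new Promise((resolve) => {
             resolve("Stage 1");
          });
          promise1
          .then((value) => {
             output.innerHTML += value + "<br />";
             // Returning a promise that rejects after a delay.
             return new Promise((resolve, reject) => {
                setTimeout(() => { reject("Stage 2 failed!"); }, 500);
             });
          })
          .then((value) => {
             // Skipped: the returned promise was rejected.
             output.innerHTML += value + "<br />";
          })
          .catch((error) => {
             output.innerHTML += error;
          });
       </script>
    </body>
    </html>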

    Converting Nested Callback Functions into the Promise Chaining

    You learned about nested callback functions in the JavaScript Callbacks chapter. This pattern is also called callback hell due to its deeply nested, hard-to-read syntax.

    Here, we will learn to convert callback hell into promise chaining to make the code more readable.

    Let's look at an example of nested callback functions.

    Nested Callback Functions

    Example

    In the code below, the updateData() function takes data as its first parameter and a callback function as its second parameter.

    The updateData() function calls the callback function after 1000 milliseconds, passing data as an argument.

    Next, we invoked the updateData() function, passing 10 as the first argument and an anonymous function as the callback.

    The callback function adds 1 to the num1 value and stores the result in p.

    Next, we call the updateData() function again inside the callback function, passing new data and another callback function as arguments. This way, we have defined the nested callback functions.

    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById("output");
          output.innerHTML += "Wait for updating the data...<br>";
          // Callback hell
          function updateData(data, callback) {
             setTimeout(() => { callback(data); }, 1000);
          }
          updateData(10, function (num1) {
             let p = 1 + num1;
             updateData(30, function (num2) {
                let q = 1 + num2;
                updateData("The numeric value is: " + (p + q), function (answer) {
                   output.innerText += answer;
                });
             });
          });
       </script>
    </body>
    </html>

    Output

    Wait for updating the data...
    The numeric value is: 42
    

    Now, let's learn to convert the above example into promise chaining.

    Converting nested callback functions to promise chaining

    Example

    In the code below, the updateData() function returns a single promise.

    After that, we used promise chaining as an alternative to the callback hell defined in the above example.

    <html>
    <body>
       <div id="output"></div>
       <script>
          let output = document.getElementById("output");
          output.innerHTML += "Wait for updating the data...<br>";
          function updateData(data) {
             return new Promise((resolve, reject) => {
                setTimeout(() => { resolve(data); }, 1000);
             });
          }
          updateData(10)
          .then((num1) => {
             let p = 1 + num1;
             return updateData(p);
          })
          .then((num2) => {
             let q = 31;
             return updateData("The final value is: " + (num2 + q));
          })
          .then((res) => {
             output.innerText += res;
          });
       </script>
    </body>
    </html>

    Output

    Wait for updating the data...
    The final value is: 42
    

    Real-time Examples of Promise Chaining

    In real-world development, you can use promise chaining to fetch data from an API and then perform operations on that data.

    Example

    In the code below, when the user clicks the Fetch Data button, it invokes the fetchData() function.

    In the fetchData() function, we have used the fetch() API to fetch data from the API.

    After that, we used the then() method to convert the response into JSON.

    Next, we used the then() method again to print the JSON data.

    <html>
    <body>
       <button onclick="fetchData()"> Fetch Data </button>
       <div id="output"></div>
       <script>
          let output = document.getElementById("output");
          function fetchData() {
             fetch('https://jsonplaceholder.typicode.com/todos/1')
             .then(response => response.json()) // Promise chaining
             .then((data) => {
                output.innerHTML += "The data is - " + JSON.stringify(data);
             });
          }
       </script>
    </body>
    </html>

    Output