Category: 04. Adversarial Search


  • Alpha-Beta Pruning

    Alpha-beta pruning is an optimisation technique for the minimax algorithm: a modified version of minimax that returns the same move while examining fewer nodes.

    As we have seen with the minimax search algorithm, the number of game states it must examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree, and this technique is called pruning. Because it involves two threshold parameters, alpha and beta, it is called alpha-beta pruning. It is also called the alpha-beta algorithm.

    Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only individual leaves but entire sub-trees. The two parameters can be defined as:

    1. Alpha: The best (highest-value) choice found so far at any point along the path for the Maximiser. The initial value of alpha is -∞.
    2. Beta: The best (lowest-value) choice found so far at any point along the path for the Minimiser. The initial value of beta is +∞.

    Alpha-beta pruning returns the same move as the standard minimax algorithm, but it skips the nodes that have no effect on the final decision, the very nodes that make plain minimax slow. By pruning these nodes, it makes the algorithm faster.

    Note: To better understand this topic, kindly study the minimax algorithm.

    Conditions for Alpha-beta Pruning

    The main condition required for alpha-beta pruning is:

    α>=β

    Key Points about Alpha-beta Pruning:

    • The Max player will only update the value of alpha.
    • The Min player will only update the value of beta.
    • While backtracking the tree, the node values will be passed to the upper nodes instead of the values of alpha and beta.
    • We will only pass the alpha and beta values to the child nodes.

    Pseudo-code for Alpha-beta Pruning

    function minimax(node, depth, alpha, beta, maximizingPlayer) is
        if depth == 0 or node is a terminal node then
            return static evaluation of node

        if maximizingPlayer then                // for Maximiser player
            maxEva = -infinity
            for each child of node do
                eva = minimax(child, depth-1, alpha, beta, false)
                maxEva = max(maxEva, eva)
                alpha = max(alpha, maxEva)
                if beta <= alpha then
                    break
            return maxEva

        else                                    // for Minimiser player
            minEva = +infinity
            for each child of node do
                eva = minimax(child, depth-1, alpha, beta, true)
                minEva = min(minEva, eva)
                beta = min(beta, minEva)
                if beta <= alpha then
                    break
            return minEva
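    The pseudo-code above can be sketched in Python. A game tree is represented here as nested lists whose innermost entries are static evaluation scores; the tree and its leaf values are arbitrary illustrations (the pruned-away leaves in the worked example below are filled in with stand-in numbers), not part of any particular game:

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing_player):
    """Alpha-beta minimax over a tree given as nested lists of leaf scores."""
    # A leaf (a bare number) or the depth limit ends the recursion
    # with a static evaluation.
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing_player:
        max_eva = -math.inf
        for child in node:
            eva = alphabeta(child, depth - 1, alpha, beta, False)
            max_eva = max(max_eva, eva)
            alpha = max(alpha, max_eva)
            if beta <= alpha:   # cut-off: Min will never allow this branch
                break
        return max_eva
    else:
        min_eva = math.inf
        for child in node:
            eva = alphabeta(child, depth - 1, alpha, beta, True)
            min_eva = min(min_eva, eva)
            beta = min(beta, min_eva)
            if beta <= alpha:   # cut-off: Max already has a better option
                break
        return min_eva

# A -> B, C; B -> D, E; C -> F, G; leaves as in the worked example below,
# with arbitrary stand-in values (9, 7, 5) for the leaves that get pruned.
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, 3, -math.inf, math.inf, True))  # 3
```

    The initial call passes alpha = -∞ and beta = +∞, exactly as the parameter definitions above prescribe.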

    Working of Alpha-Beta Pruning

    Let’s take an example of a two-player search tree to understand the working of Alpha-beta pruning:

    Step 1: At the first step, the Max player will start the first move from node A, where α= -∞ and β= +∞; these values of alpha and beta are passed down to node B, where again α= -∞ and β= +∞, and Node B passes the same value to its child D.


    Step 2: At node D it is Max’s turn, so the value of α will be calculated. α is compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value will also be 3.

    Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min’s turn. β = +∞ is compared with the available successor’s value, i.e., min(∞, 3) = 3; hence at node B, now α = -∞ and β = 3.


    In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down to it.

    Step 4: At node E, Max will take its turn, and the value of α will change. The current value of α will be compared with 5, so max (−∞, 5) = 5, hence at node E α = 5 and β = 3, where α>=β, so the right successor of E will be pruned, and the algorithm will not traverse it, and the value at node E will be 5.


    Step 5: At the next step, the algorithm again backtracks the tree from node B to node A. At node A, the value of alpha will be changed to the maximum available value of 3 as max (-∞, 3)= 3, and β= +∞; these two values now pass to the right successor of A, which is Node C.

    At node C, α=3 and β= +∞, and the same values will be passed on to node F.

    Step 6: At node F, the value of α is again compared, first with the left child, 0: max(3, 0) = 3; and then with the right child, 1: max(3, 1) = 3. So α remains 3, but the node value of F becomes max(0, 1) = 1.


    Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of β will change: it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, which is G, is pruned, and the algorithm does not compute its entire sub-tree.


    Step 8: C now returns the value 1 to A. The best value for A is max(3, 1) = 3. The final game tree shows which nodes were computed and which were never evaluated. Hence, the optimal value for the maximiser in this example is 3.


    Move Ordering in Alpha-Beta Pruning

    The effectiveness of alpha-beta pruning is highly dependent on the order in which each node is examined. Move order is an important aspect of alpha-beta pruning.

    It can be of two types:

    1. Worst ordering: In some cases, the alpha-beta pruning algorithm prunes none of the leaves of the tree and examines the tree exactly as the minimax algorithm does, while the bookkeeping for alpha and beta adds some overhead; this is called worst ordering. It occurs when the best move lies on the right side of the tree. The time complexity for such an order is O(b^m).
    2. Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a great deal of pruning happens because the best moves lie on the left side of the tree. Since the algorithm applies DFS, it searches the left of the tree first and can go twice as deep as the minimax algorithm in the same amount of time. The complexity under ideal ordering is O(b^(m/2)).
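    To make the contrast concrete, here is a small hedged sketch in Python that counts how many leaf evaluations each ordering requires on the same set of values; the two trees and their numbers are arbitrary illustrations, not from any real game:

```python
import math

def alphabeta(node, alpha, beta, maximizing, counter):
    """Alpha-beta over nested lists; counter[0] tallies leaf evaluations."""
    if not isinstance(node, list):      # leaf: static evaluation
        counter[0] += 1
        return node
    if maximizing:
        best = -math.inf
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False, counter))
            alpha = max(alpha, best)
            if beta <= alpha:
                break
        return best
    best = math.inf
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True, counter))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

good = [[6, 5, 4], [3, 2, 1]]   # best branch for Max on the left
bad  = [[1, 2, 3], [4, 5, 6]]   # best branch for Max on the right

results = {}
for name, tree in [("best-first ordering", good), ("worst-first ordering", bad)]:
    counter = [0]
    value = alphabeta(tree, -math.inf, math.inf, True, counter)
    results[name] = (value, counter[0])
    print(name, "-> value:", value, "leaves examined:", counter[0])
```

    Both orderings return the same root value (4), but the best-first tree examines only 4 of the 6 leaves, while the worst-first tree examines all 6.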

    Rules to Find a Good Ordering

    The following are some rules to find good ordering in alpha-beta pruning:

    • Try the best moves first, starting from the shallowest nodes.
    • Order the nodes in the tree so that the best nodes are checked first.
    • Use domain knowledge when ordering moves. For example, in Chess, try captures first, then threats, then forward moves, then backward moves.
    • Keep a record of visited states, as there is a possibility that states may repeat.

    Applications of Alpha-Beta Pruning

    Usage of AI for Board Games

    • Chess: Engines such as Stockfish use techniques like alpha-beta pruning. It prunes branches that will not affect the final decision, allowing millions of possible board configurations to be evaluated in a reasonable time. For example, if a sequence of moves is already known to lead to a loss, the algorithm skips exploring it further.
    • Checkers: In Checkers, the algorithm must evaluate possible moves so as to maximise the AI’s score and minimise the opponent’s chances. Alpha-beta pruning avoids unnecessary exploration, which reduces the planning time needed for deep strategy.
    • Tic-Tac-Toe: Even though the game is much simpler, Alpha-Beta Pruning efficiently figures out which moves are best by pruning paths that end in draws or losses before reaching the end of the decision process.

    Decision-Making in Adversarial Search Problems

    • Security Systems: In the area of cybersecurity, Alpha-Beta Pruning is used in intrusion detection systems for decision-making. Thus, it can act as a model of adversarial behavior as well as determine optimal strategies to counter threats while minimizing system vulnerabilities.
    • Economic Modelling: The alpha-beta Pruning technique helps companies in competitive markets to simulate adversarial scenarios to derive the best strategies, maximizing their profit and decreasing the impact of competitor moves.
    • Robotics and Automation: Such pruning is used by robots to decide efficient paths and responses when using adversarial planning for tasks, such as competitive tasks or navigation in dynamic environments.

    Enhancements in Real-Time Strategy Games

    • Combat Simulations: In RTS games such as StarCraft, possible attack and defence strategies are evaluated using alpha-beta pruning. This allows the AI to predict an opponent’s move, act for the best in a given situation, and gain an advantage over the opponent.
    • Resource Management: In RTS games, decisions concern allocating resources; for instance, should the AI spend its time amassing more resources or building armies? Alpha-beta pruning lets the AI balance these competing objectives far more economically.
    • Dynamic Scenarios: RTS games differ from turn-based games in that play is continuous and decisions must change in real time. Real-time execution of alpha-beta pruning requires some adaptation, and it is critical for pruning away unnecessary options among the considered moves so as to keep the computation efficient.

    Advantages and Disadvantages of Alpha-Beta Pruning

    Advantages

    • Enhanced Efficiency Compared to Plain Minimax: Alpha-beta pruning can considerably enhance the performance of the minimax algorithm by cutting down the number of nodes explored in the decision tree. The efficiency comes from pruning branches that cannot affect the result, so the algorithm spends its effort only on the most promising paths; this speeds up decision-making in games such as chess or checkers.
    • Applicability to Large Decision Trees: This algorithm is especially useful for games and decision problems with large search spaces. Removing unneeded branches permits deeper searching of the tree for the same computational budget. This depth advantage is very important for making better decisions, particularly in competitive environments where precise foresight is essential.

    Disadvantages

    • Dependency on the Order of Node Evaluation: The efficiency of alpha-beta pruning depends greatly on the order in which nodes are examined. If the algorithm processes the most promising moves first (good node ordering), the pruning effect is at its maximum; poor ordering results in little pruning and harms computational efficiency. Proper implementation therefore typically relies on heuristics or preprocessing to obtain good performance.
    • Computational Overhead for Deep and Complex Trees: Alpha-beta pruning reduces node exploration but may still be inefficient on highly complex trees of extremely high depth. In particular, without good node-ordering heuristics, the algorithm’s effort can approach the worst case. When the problem domain calls for it, alternatives such as Monte Carlo Tree Search may be more appropriate.
  • Mini-Max Algorithm in AI

    • The mini-max algorithm is a recursive, backtracking algorithm used in decision-making and game theory. It provides the optimal move for a player on the assumption that the opponent also plays optimally.
    • The mini-max algorithm uses recursion to search through the game tree.
    • The min-max algorithm is mostly used for game playing in AI, such as Chess, Checkers, tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax decision for the current state.
    • Two players play the game; one is called MAX and the other is called MIN.
    • Each player plays so that the opponent gets the minimum benefit while they get the maximum benefit.
    • Both players are opponents of each other: MAX selects the maximised value and MIN selects the minimised value.
    • The minimax algorithm performs a depth-first search to explore the complete game tree.
    • The minimax algorithm proceeds all the way down to the terminal nodes of the tree and then backtracks the tree through the recursion.

    Pseudo-code for MinMax Algorithm:

    function minimax(node, depth, maximizingPlayer) is
        if depth == 0 or node is a terminal node then
            return static evaluation of node

        if maximizingPlayer then         // for Maximizer player
            maxEva = -infinity
            for each child of node do
                eva = minimax(child, depth-1, false)
                maxEva = max(maxEva, eva)    // maximum of the values
            return maxEva

        else                             // for Minimizer player
            minEva = +infinity
            for each child of node do
                eva = minimax(child, depth-1, true)
                minEva = min(minEva, eva)    // minimum of the values
            return minEva

    Initial call:

    Minimax(node, 3, true)

    Working of Min-Max Algorithm:

    • The working of the minimax algorithm can be easily described using an example. Below we have taken an example of a game tree representing a two-player game.
    • In this example there are two players, one called Maximizer and the other called Minimizer.
    • Maximizer tries to get the maximum possible score, and Minimizer tries to get the minimum possible score.
    • The algorithm applies DFS, so in this game tree we have to go all the way down to the leaves to reach the terminal nodes.
    • At the terminal nodes the terminal values are given, so we compare those values and backtrack the tree until the initial state is reached. The following are the main steps involved in solving the two-player game tree:

    Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to obtain the utility values for the terminal states. In the tree diagram below, let A be the initial state of the tree. Suppose the maximizer takes the first turn, with a worst-case initial value of -infinity, and the minimizer takes the next turn, with a worst-case initial value of +infinity.


    Step 2: First we find the utility values for the Maximizer. Its initial value is -∞, so we compare each terminal value with this initial value and determine the values of the higher nodes, taking the maximum among them all.

    • For node D         max(-1, -∞) = -1, then max(-1, 4) = 4
    • For node E         max(2, -∞) = 2, then max(2, 6) = 6
    • For node F         max(-3, -∞) = -3, then max(-3, -5) = -3
    • For node G         max(0, -∞) = 0, then max(0, 7) = 7

    Step 3: In the next step, it is the minimizer’s turn, so it compares its children’s values with +∞ and determines the third-layer node values.

    • For node B= min(4,6) = 4
    • For node C= min (-3, 7) = -3

    Step 4: Now it is the Maximizer’s turn again; it chooses the maximum of all node values and so finds the value of the root node. In this game tree there are only four layers, so we reach the root node immediately, but in real games there will be many more layers.

    • For node A max(4, -3)= 4

    That was the complete workflow of the minimax two player game.
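    The worked example above can be reproduced in a few lines of Python; the nested-list tree encodes exactly the leaf values given in the steps:

```python
def minimax(node, maximizing):
    """Plain minimax over a tree given as nested lists of terminal values."""
    if not isinstance(node, list):      # terminal node: return its value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The four-layer tree from the steps above:
# A -> B, C; B -> D, E; C -> F, G; leaves as given at the terminal states.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))  # 4, the optimal value backed up to root A
```

    Running it confirms the backed-up values: D = 4, E = 6, F = -3, G = 7; B = 4, C = -3; and A = max(4, -3) = 4.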

    Properties of Mini-Max algorithm:

    • Complete: The min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
    • Optimal: The min-max algorithm is optimal if both opponents play optimally.
    • Time complexity: As it performs DFS over the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
    • Space complexity: The space complexity of the mini-max algorithm is similar to that of DFS, which is O(bm).

    Limitation of the minimax Algorithm:

    The main drawback of the minimax algorithm is that it becomes very slow for complex games such as Chess and Go. These games have a huge branching factor, and the player has many choices to decide among. This limitation of the minimax algorithm can be improved upon with alpha-beta pruning, which is discussed in its own topic.

  • Adversarial Search

    Adversarial search is a subfield of artificial intelligence concerned with the algorithms and strategies involved in decision-making in environments where players with conflicting goals devise strategies to counter each other, in play, business, and elsewhere. Adversarial search therefore aims at identifying suitable moves for a player while considering the probable responses of the opponents.

    The purpose of the adversarial search is to develop strategies that can enable an agent to select the most effective action in situations where agents are competing or in conflict with each other, as well as predicting an action of an opponent and anticipating a counteraction. In order to determine the best next move or decision to make, most adversarial search algorithms traverse a game tree that defines all the possible game states and their transitions.

    In general, adversarial search can be regarded as a very challenging and promising area of AI research that requires an understanding of game theory as well as decision-making and optimization concepts such as mixed strategies. It finds applications in many fields and is still one of the active areas of research in the context of AI.

    Importance of Adversarial Search in AI

    Adversarial search plays an essential role in artificial intelligence. It is of great importance in two areas in particular:

    Game-Playing: One of the key areas where adversarial search is applied is games. From Chess, Checkers, and Go to complex video games, AI agents use adversarial search to analyse positions and decide on the right moves in competition. One striking demonstration of this capacity is defeating a human competitor: in 1997, Deep Blue, a chess computer developed by IBM, beat the world chess champion Garry Kasparov with the help of adversarial search.

    Decision-Making: Apart from games, the application of adversarial search occurs in any decision-making process. It is applicable where the objective of the individuals varies, and they have to find the most optimal solution. It is useful in other fields, such as economics, robotics, or even military planning and strategy, as the agents have to factor in their decisions based on the actions and goals of the opponents. Adversarial search equips organizations with AI tools and methodologies to solve problems in settings that may be complicated, dynamic, uncertain, and even sometimes hostile.

    Different Game Scenarios using Adversarial Search

    Perfect information: A game with perfect information is one in which agents can see the complete board. Agents have all the information about the game and can see each other’s moves. Examples are Chess, Checkers, Go, etc.

    Imperfect information: If agents in a game do not have all the information about the game state and are not aware of everything going on, such games are called games with imperfect information. Examples are Battleship, blind tic-tac-toe, Bridge, Poker, etc.

    Deterministic games: Deterministic games follow a strict pattern and set of rules for the games, and there is no randomness associated with them. Examples are Chess, Checkers, Go, Tic-Tac-Toe, etc.

    Non-deterministic games: Non-deterministic games have various unpredictable events and involve a factor of chance or luck. Either dice or cards introduce this factor of chance or luck. These are random, and each action response is not fixed. Such games are also called stochastic games. Examples are Backgammon, Monopoly, Poker, etc.

    Zero-sum games: These are exclusively competitive games in which the enhancement of one player’s position is equal to a decrement in the position of another. Each of the players in these games will have different strategies across the opposition, and the net gain or loss is zero. Each person always seeks to achieve the maximum amount of profit or minimize the amount of loss with regard to the context of the game. Chess and tic-tac-toe are examples of a Zero-sum game.

    Zero-sum Game: Embedded Thinking

    Zero-sum games involve embedded thinking, in which one agent or player tries to figure out:

    • What to do.
    • How to decide on the move.
    • What the opponent will do; the player needs to think about the opponent as well.
    • The opponent, likewise, thinks about what to do.
    • Each player tries to work out the opponent’s response to their actions. This requires embedded thinking, or backward reasoning, to solve game problems in AI.

    Formalization of the problem:

    A game can be defined as a type of search in AI that involves the following elements:

    • Initial state: It specifies how the game is set up at the start.
    • Player(s): It specifies which player has the move in a state.
    • Actions(s): It returns the set of legal moves in a state.
    • Result(s, a): It is the transition model, which specifies the result of taking move a in state s.
    • Terminal-Test(s): The terminal test is true if the game is over and false otherwise. States where the game has ended are called terminal states.
    • Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s for player p. It is also called the payoff function. For Chess, the outcomes are a win, a loss, or a draw, with payoff values +1, 0, and ½; for tic-tac-toe, the utility values are +1, -1, and 0.
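    These six elements can be made concrete with a deliberately tiny game. The sketch below formalises a hypothetical pile-of-sticks Nim variant (each turn a player removes 1 or 2 sticks; whoever takes the last stick wins); the game choice, state encoding, and names are illustrative assumptions, not part of any standard API:

```python
# State = (sticks_left, player_to_move); the six elements of the formalization.

INITIAL_STATE = (5, "MAX")                  # Initial state

def player(s):                              # Player(s): who has the move
    return s[1]

def actions(s):                             # Actions(s): legal moves
    return [n for n in (1, 2) if n <= s[0]]

def result(s, a):                           # Result(s, a): transition model
    nxt = "MIN" if s[1] == "MAX" else "MAX"
    return (s[0] - a, nxt)

def terminal_test(s):                       # Terminal-Test(s)
    return s[0] == 0

def utility(s, p):                          # Utility(s, p): +1 win, -1 loss
    # The player to move at an empty pile did NOT take the last stick.
    return -1 if s[1] == p else +1

def minimax_value(s):
    """Back up terminal utilities (from MAX's viewpoint) through the tree."""
    if terminal_test(s):
        return utility(s, "MAX")
    vals = [minimax_value(result(s, a)) for a in actions(s)]
    return max(vals) if player(s) == "MAX" else min(vals)

print(minimax_value(INITIAL_STATE))  # 1: MAX wins a 5-stick pile with best play
```

    With 5 sticks the first player can always leave the opponent a multiple of 3, so the backed-up value is +1; a 3-stick pile would instead yield -1 for the player to move.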

    Game Tree

    A game tree is a tree where nodes of the tree are the game states, and Edges of the tree are the moves by players. The game tree involves the initial state, action function, and result function.

    It has several nodes, the highest of which is the root node. Every node stands for a position in the game, and each edge stands for a move. The two players, referred to as Maximizer and Minimizer, take turns on each layer of the tree. Maximizer seeks to maximise the minimum gain, while Minimizer seeks to minimise the maximum loss. Depending on the context of the game and the moves the other player has made, a player acts as the Maximizer or the Minimizer.

    Example: Tic-Tac-Toe Game Tree

    The following figure shows part of the game tree for the tic-tac-toe game. Following are some key points of the game:

    • There are two players, MAX and MIN.
    • Players have an alternate turn and start with MAX.
    • MAX maximizes the result of the game tree
    • MIN minimizes the result.

    Example Explanation:

    • From the initial state, MAX has nine possible moves, as it goes first. MAX places ‘X’ and MIN places ‘O’, and the players alternate until we reach a leaf node where one player has three in a row or all squares are filled.
    • For each node, both players compute the minimax value, the best achievable utility against an optimal adversary.
    • Suppose both players know tic-tac-toe well and play their best game. Each player does their best to prevent the other from winning; MIN acts against MAX in the game.
    • So in the game tree we have a layer for MAX and a layer for MIN, and each layer is called a ply. MAX places ‘X’, then MIN places ‘O’ to prevent MAX from winning, and the game continues until a terminal node is reached.
    • A terminal situation is one where MIN wins, MAX wins, or it is a draw. This game tree is the whole search space of possibilities in which MIN and MAX play tic-tac-toe, taking turns alternately.

    The Minimax Algorithm in Adversarial Search

    Minimax is one of the most central ideas in adversarial search, since it is well suited to two-player games. It assists decision-making by presupposing that each player makes their best decision. The algorithm works to establish the least amount that can be lost in the worst-case situation considered.

    • Game Tree Construction: The algorithm builds a game tree with nodes representing game states and edges representing possible moves.
    • Maximizer and Minimizer: The algorithm alternates between two players: the Maximizer, who tries to maximize the score, and the Minimizer, who tries to minimize it.
    • Evaluation: At the terminal nodes of the tree, a utility function evaluates the game’s outcome. The algorithm backtracks from these terminal nodes to determine the optimal move for the current player.

    Introduction to Alpha-Beta Pruning

    Alpha-Beta Pruning is a search optimisation for the Minimax algorithm that eliminates many nodes from the game tree. It optimises the algorithm by removing subtrees that require no further examination, because none of the paths under them can change the final choice.

    • Alpha and Beta values: Alpha is the best value found so far for the Maximizer at that level and above, while Beta is the best value the Minimizer can guarantee at that level and above.
    • Pruning: If a node’s value falls outside the current alpha-beta window, it is unnecessary to continue assessing the node’s remaining descendants, and the search is pruned.
    • Speed: By reducing the number of nodes expanded, Alpha-Beta Pruning finds the optimal solution more quickly than the plain Minimax algorithm.

    How Alpha-Beta Pruning Reduces Node Exploration

    Alpha-Beta Pruning greatly limits exploration of the game tree by not expanding nodes that have no impact on the outcome.

    • Early Termination: Branches that can never affect the final result are pruned as early as possible, saving unnecessary computation.
    • Reduced Complexity: With good move ordering, the number of nodes examined drops from O(b^m) towards O(b^(m/2)), effectively halving the exponent; the search remains exponential, but performance improves markedly.
    • Optimal Decisions: Even though Alpha-Beta Pruning eliminates certain nodes, the decisions made are still optimal, because only calculations that cannot change the result are omitted.
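    A small experiment illustrates the reduction. The sketch below runs plain minimax and alpha-beta over the same tree and counts leaf evaluations; the tree is randomly generated (with a fixed seed), so the exact pruned count is illustrative only:

```python
import math
import random

def minimax(node, maximizing, count):
    """Plain minimax; count[0] tallies leaf evaluations."""
    if not isinstance(node, list):
        count[0] += 1
        return node
    vals = [minimax(c, not maximizing, count) for c in node]
    return max(vals) if maximizing else min(vals)

def alphabeta(node, alpha, beta, maximizing, count):
    """Alpha-beta; identical result, fewer leaf evaluations."""
    if not isinstance(node, list):
        count[0] += 1
        return node
    if maximizing:
        best = -math.inf
        for c in node:
            best = max(best, alphabeta(c, alpha, beta, False, count))
            alpha = max(alpha, best)
            if beta <= alpha:
                break
        return best
    best = math.inf
    for c in node:
        best = min(best, alphabeta(c, alpha, beta, True, count))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

def random_tree(depth, branching):
    if depth == 0:
        return random.randint(-10, 10)
    return [random_tree(depth - 1, branching) for _ in range(branching)]

random.seed(0)
tree = random_tree(4, 3)                     # 3^4 = 81 leaves
plain, pruned = [0], [0]
v1 = minimax(tree, True, plain)
v2 = alphabeta(tree, -math.inf, math.inf, True, pruned)
assert v1 == v2                              # same decision value
print("leaves, plain minimax:", plain[0])    # 81
print("leaves, alpha-beta:  ", pruned[0])    # typically far fewer
```

    Both searches agree on the root value; only the number of evaluated leaves differs, which is the whole point of the technique.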

    Real-World Applications of Adversarial Search beyond Games

    It is important to stress that Minimax and Alpha-Beta Pruning are not limited to games; they have wider uses:

    • Strategic Planning: This kind of planning is essential wherever strategic decision-making is needed, for instance, in the military or the placement of resources.
    • Robotics: It can be used in robotics where it is needed in path planning and decision-making in competitive domains.
    • Economics: Used in the context of the model where agents or actors are struggling for resources or market stakes.
    • Cybersecurity: Employed in adversarial scenarios such as threat detection and mitigation strategies against cyber-attacks.

    Important Features of Adversarial Search

    Adversarial search is an important area in artificial intelligence. This is about making decisions in hostile circumstances. Below are some major aspects of adversarial search:

    Perfect or Imperfect Information

    In games, there are two major categories of information that players can access: perfect and imperfect. In games with perfect information, the players all have full knowledge of the current state of a match. However, in games with imperfect information, players are not presented with all the details.

    Adversarial Search Algorithms

    In any competitive game such as Chess, a player can use a technique like the min-max strategy or alpha-beta pruning to find the best move. Such algorithms calculate the possible results of each move and indicate which move is most appropriate for the player.

    Rule of Thumb

    The tree can get rather large, which may mean that a full search of it cannot be completed. In such situations the algorithm uses heuristics, i.e., shortcut rules, enabling it to move faster over the game tree by bypassing the evaluation of many possible moves. These rules of thumb suggest the best possible move to make without actually traversing the entire game tree.

    Challenges in Adversarial Search

    There are several challenges in adversarial search:

    Computational Complexity

    • Exponential Growth: As the number of moves increases, the size of the game tree grows exponentially, and it may become practically impossible to search it to the end.
    • Memory Usage: Storing large game trees can also require a huge amount of memory.

    Heuristic Evaluation

    • Accuracy: Even a carefully designed heuristic evaluation function may not accurately represent the real game situation.
    • Scalability: Heuristic functions may not scale well to large problems and complex problem spaces.

    Opponent Modeling

    • Unpredictability: It is often not easy to model and predict an opponent’s actions, particularly when the opponent operates in a complex environment.
    • Adaptability: Countering an opponent’s style of play is even more challenging when it has to be done on the fly.

    Real-Time Decision Making

    • Time Constraints: Making decisions in real time over vast search spaces is very difficult when the time available for each decision is limited.
    • Optimality vs. Speed: Balancing the quality of decisions against the time taken to make them is a major challenge.

    Conclusion

    Adversarial search, an essential technique in artificial intelligence, underpins game-solving and other critical decision-making processes. It provides a systematic and efficient way of interacting with the environment and making choices in competitive settings while estimating the plans of adversaries. From the basic minimax algorithm to alpha-beta pruning and heuristic evaluation, adversarial search has advanced in response to high branching factors, the horizon effect, and limited computational power.