Types of Agents in AI 

In artificial intelligence (AI), an agent is any entity that perceives its environment through sensors and acts on it through actuators. Agents are a key component in the design of intelligent systems because they can learn, adapt, make choices, and interact with their environment. AI agents come in many types depending on their complexity, capabilities, and tasks; they range from a simple reflex agent to a sophisticated learning agent.

Agents can be grouped into five classes based on their degree of perceived intelligence and capability, listed below from the simplest to the most capable:

  • Simple reflex agent
  • Model-based reflex agent
  • Goal-based agent
  • Utility-based agent
  • Learning agent

1. Simple reflex agent

Simple reflex agents are the simplest agents. They make decisions on the basis of the current percept alone and ignore the rest of the percept history, so they succeed only in a fully observable environment. A simple reflex agent works on condition-action rules, which map the current state directly to an action. For example, a room-cleaner agent sucks only if there is dirt in its current location.
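A condition-action rule set can be written down literally as a lookup table. Here is a minimal sketch (the rules mirror the vacuum example used throughout this article; the function name is illustrative):

```python
# Condition-action rules as a literal lookup table:
# (location, status) -> action
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Move Right",
    ("B", "Clean"): "Move Left",
}

def simple_reflex_action(location, status):
    # The agent reacts only to the current percept -- no memory involved
    return RULES[(location, status)]

print(simple_reflex_action("A", "Dirty"))  # Suck
```

Note that the table must cover every possible percept, which is why such tables can grow impractically large in richer environments.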

Problems with the simple reflex agent design approach:

  • They have very limited intelligence.
  • They have no knowledge of non-perceptual parts of the current state.
  • Their condition-action rule tables are usually too big to generate and store.
  • They are not adaptive to changes in the environment.

Here is an implementation of a simple reflex agent.

Code:

import matplotlib.pyplot as plt
import matplotlib.patches as patches

# Defining the simple reflex agent rule
def agent_Simple_Reflex(loc, stat):
    if stat == "Dirty":
        return "Suck"
    elif loc == "A":
        return "Move Right"
    elif loc == "B":
        return "Move Left"

# Visualization function
def environment_draw(loc, environment, step, act):
    fig, ax = plt.subplots(figsize=(6, 3))

    # Draw two rooms, A and B
    ax.add_patch(patches.Rectangle((0, 0), 3, 3, fill=True, color='gray' if environment["A"] == "Dirty" else 'lightgreen'))
    ax.text(1.5, 1.5, "A\n" + environment["A"], ha='center', va='center', fontsize=12)

    ax.add_patch(patches.Rectangle((3, 0), 3, 3, fill=True, color='gray' if environment["B"] == "Dirty" else 'lightgreen'))
    ax.text(4.5, 1.5, "B\n" + environment["B"], ha='center', va='center', fontsize=12)

    # Drawing the agent
    X_agent = 1.5 if loc == "A" else 4.5
    ax.plot(X_agent, 2.5, marker="o", markersize=20, color="blue")
    ax.text(X_agent, 2.9, "🤖", ha='center', va='center', fontsize=14)

    # Text info
    ax.set_title(f"Step {step}: {act}")
    ax.axis("off")
    plt.pause(1)
    plt.close()

# Simulating the environment and visualizing
def run_vacuum_world_with_visualization(start_loc, environment):
    loc = start_loc
    for step in range(1, 6):  # Run for 5 steps
        stat = environment[loc]
        act = agent_Simple_Reflex(loc, stat)

        # Visualization
        environment_draw(loc, environment, step, act)

        # Updating the environment
        if act == "Suck":
            environment[loc] = "Clean"
        elif act == "Move Right":
            loc = "B"
        elif act == "Move Left":
            loc = "A"

# Environment setup
state_environment = {"A": "Dirty", "B": "Dirty"}
run_vacuum_world_with_visualization("A", state_environment.copy())


Output:


The agent cleaned the rooms based on the current perception without memory or planning, but failed to act optimally when both rooms were clean or when actions required memory.

2. Model-based reflex agent

  • A model-based reflex agent can work in a partially observable environment and keep track of the situation.
  • A model-based agent has two important factors:
    • Model: knowledge about “how things happen in the world”; this is what makes the agent model-based.
    • Internal state: a representation of the current state based on the percept history.
  • These agents maintain the model, i.e., knowledge of the world, and perform actions based on it.
  • Updating the agent’s state requires information about:
    1. How the world evolves.
    2. How the agent’s actions affect the world.
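Those two pieces of update information can be sketched as a single state-update step. This is an illustrative sketch, not part of the implementation below (which only stores percepts); the function and dictionary names are assumptions:

```python
def update_internal_state(state, action, percept):
    """Refresh the internal state using the action model and the new percept."""
    state = dict(state)
    # 1. How the agent's own action changes the world
    if action == "Suck":
        state[state["loc"]] = "Clean"
    elif action == "Move Right":
        state["loc"] = "B"
    elif action == "Move Left":
        state["loc"] = "A"
    # 2. The new percept overrides any stale belief about the current room
    loc, status = percept
    state[loc] = status
    return state

belief = {"loc": "A", "A": "Unknown", "B": "Unknown"}
belief = update_internal_state(belief, "Suck", ("A", "Clean"))
print(belief)  # {'loc': 'A', 'A': 'Clean', 'B': 'Unknown'}
```

The agent can thus hold beliefs (such as "B is Unknown") about parts of the world it cannot currently perceive.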

Here is an implementation of a model-based reflex agent.

Code:

import matplotlib.pyplot as plt
import matplotlib.patches as patches

# Model-based reflex agent function
def model_based_reflex_agent(loc, percept_stat, model_internal):
    # Updating the internal model with the current percept
    model_internal[loc] = percept_stat

    # Decision rules; an "Unknown" room still needs a visit
    if percept_stat == "Dirty":
        return "Suck"
    elif model_internal["A"] in ("Dirty", "Unknown"):
        return "Move Left"
    elif model_internal["B"] in ("Dirty", "Unknown"):
        return "Move Right"
    else:
        return "NoOp"  # Do nothing if everything is clean

# Visualization function
def environment_draw(loc, state_env, step, act):
    fig, ax = plt.subplots(figsize=(6, 3))

    # Room A
    ax.add_patch(patches.Rectangle((0, 0), 3, 3, color='gray' if state_env["A"] == "Dirty" else 'lightgreen'))
    ax.text(1.5, 1.5, "A\n" + state_env["A"], ha='center', va='center', fontsize=12)

    # Room B
    ax.add_patch(patches.Rectangle((3, 0), 3, 3, color='gray' if state_env["B"] == "Dirty" else 'lightgreen'))
    ax.text(4.5, 1.5, "B\n" + state_env["B"], ha='center', va='center', fontsize=12)

    # Agent
    X_agent = 1.5 if loc == "A" else 4.5
    ax.plot(X_agent, 2.5, marker="o", markersize=20, color="blue")
    ax.text(X_agent, 2.9, "🤖", ha='center', va='center', fontsize=14)

    ax.set_title(f"Step {step}: {act}")
    ax.axis("off")
    plt.pause(1)
    plt.close()

# Running the model-based reflex agent simulation
def run_model_based_agent(start_loc, environment):
    loc = start_loc
    model_internal = {"A": "Unknown", "B": "Unknown"}

    for step in range(1, 7):  # Run for 6 steps
        stat_curr = environment[loc]
        act = model_based_reflex_agent(loc, stat_curr, model_internal)

        environment_draw(loc, environment, step, act)

        # Update environment and location
        if act == "Suck":
            environment[loc] = "Clean"
        elif act == "Move Right":
            loc = "B"
        elif act == "Move Left":
            loc = "A"
        elif act == "NoOp":
            break  # Stop when everything is clean

# Start the environment
state_environment = {"A": "Dirty", "B": "Dirty"}
run_model_based_agent("A", state_environment.copy())


Output:


The agent successfully cleaned both rooms using internal memory to track the environment state, allowing it to behave more intelligently than the simple reflex agent.

3. Goal-based agents

Knowledge of the current environment state is not always sufficient for an agent to decide what to do. The agent also needs a goal, which describes desirable situations. Goal-based agents expand the capabilities of the model-based agent by adding this “goal” information.

They choose actions so as to achieve the goal. These agents may have to consider a long sequence of possible actions before deciding whether the goal can be achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
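The searching and planning described above can be sketched as a breadth-first search over the two-room vacuum world. This is an illustrative sketch, separate from the implementation below; the state encoding and helper names are assumptions:

```python
from collections import deque

# State: (agent location, status of room A, status of room B)
def successors(state):
    loc, a, b = state
    if loc == "A":
        return [("Suck", ("A", "Clean", b)), ("Move Right", ("B", a, b))]
    return [("Suck", ("B", a, "Clean")), ("Move Left", ("A", a, b))]

def plan_to_goal(start):
    """Breadth-first search for the shortest action sequence
    that leaves both rooms clean."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state[1] == "Clean" and state[2] == "Clean":
            return actions
        for act, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [act]))
    return None

print(plan_to_goal(("A", "Dirty", "Dirty")))  # ['Suck', 'Move Right', 'Suck']
```

Unlike a reflex agent, this planner commits to a whole action sequence up front by simulating the world before acting.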


Here is an implementation of a goal-based agent.

Code:

import matplotlib.pyplot as plt
import matplotlib.patches as patches

# Planning logic for the goal-based agent
def goal_based_agent(loc, world_state, goal):
    # Plan: clean both rooms
    plan = []

    # If the current room is dirty, clean it
    if world_state[loc] == "Dirty":
        plan.append("Suck")
    else:
        # Check if the other room is dirty and go there
        other = "B" if loc == "A" else "A"
        if world_state[other] == "Dirty":
            move_act = "Move Right" if loc == "A" else "Move Left"
            plan.append(move_act)
        else:
            plan.append("NoOp")  # Do nothing if the goal is achieved
    return plan[0]

# Visualization function
def draw_vacuum_world(loc, world_state, step, act):
    fig, ax = plt.subplots(figsize=(6, 3))

    # Room A
    ax.add_patch(patches.Rectangle((0, 0), 3, 3, color='gray' if world_state["A"] == "Dirty" else 'lightgreen'))
    ax.text(1.5, 1.5, "A\n" + world_state["A"], ha='center', va='center', fontsize=12)

    # Room B
    ax.add_patch(patches.Rectangle((3, 0), 3, 3, color='gray' if world_state["B"] == "Dirty" else 'lightgreen'))
    ax.text(4.5, 1.5, "B\n" + world_state["B"], ha='center', va='center', fontsize=12)

    # Agent
    X_agent = 1.5 if loc == "A" else 4.5
    ax.plot(X_agent, 2.5, marker="o", markersize=20, color="blue")
    ax.text(X_agent, 2.9, "🤖", ha='center', va='center', fontsize=14)

    ax.set_title(f"Step {step}: {act}")
    ax.axis("off")
    plt.pause(1)
    plt.close()

# Simulate the goal-based agent
def run_goal_based_agent(start_loc, environment):
    loc = start_loc
    goal = {"A": "Clean", "B": "Clean"}  # Define the goal

    for step in range(1, 7):
        act = goal_based_agent(loc, environment, goal)

        draw_vacuum_world(loc, environment, step, act)

        # Update environment
        if act == "Suck":
            environment[loc] = "Clean"
        elif act == "Move Right":
            loc = "B"
        elif act == "Move Left":
            loc = "A"
        elif act == "NoOp":
            break  # Stop if the goal is achieved

# Initial state
env_init = {"A": "Dirty", "B": "Dirty"}
run_goal_based_agent("A", env_init.copy())


Output:


By organizing its activities according to the current and goal states and halting when the goal was reached, the agent was able to accomplish its objective of cleaning both rooms.

4. Utility-based agents

These agents are similar to goal-based agents but add an extra component: utility measurement, which provides a measure of success at a given state. Utility-based agents act based not only on goals but also on the best way to achieve them.

A utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action. The utility function maps each state to a real number that measures how efficiently an action achieves the goals.
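In the abstract, a utility-based decision is just an argmax of the utility function over simulated action outcomes. Here is a minimal sketch with illustrative numbers and names (the full implementation below uses its own utility values):

```python
def utility(state):
    rooms, moves = state
    clean = sum(1 for s in rooms.values() if s == "Clean")
    return 10 * clean - moves      # reward cleanliness, penalize movement

def result(rooms, moves, action, loc):
    # Toy model of each action's outcome
    rooms = dict(rooms)
    if action == "Suck":
        rooms[loc] = "Clean"
        return (rooms, moves)
    if action.startswith("Move"):
        return (rooms, moves + 1)
    return (rooms, moves)          # NoOp changes nothing

actions = ["Suck", "Move Right", "NoOp"]
rooms = {"A": "Dirty", "B": "Clean"}
best = max(actions, key=lambda a: utility(result(rooms, 0, a, "A")))
print(best)  # Suck
```

Because every state gets a real number, the agent can trade off competing concerns (cleanliness vs. movement cost) rather than merely checking whether a goal holds.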


Here is an implementation of a utility-based agent.

Code:

# Importing libraries
import matplotlib.pyplot as plt
import matplotlib.patches as patches

def utility_calcu(state_env, act):
    utility = 0
    if state_env["A"] == "Clean" and state_env["B"] == "Clean":
        utility += 100
    elif state_env["A"] == "Dirty" or state_env["B"] == "Dirty":
        utility += 10

    if act == "Suck":
        utility += 0
    elif act.startswith("Move"):
        utility -= 5
    elif act == "NoOp":
        utility -= 1

    return utility

# Decision-making on the basis of utility
def utility_based_agent(loc, state_env):
    possible_acts = ["Suck", "Move Right", "Move Left", "NoOp"]
    best_act = None
    utility_best = float('-inf')

    for act in possible_acts:
        state_simulated = state_env.copy()
        loc_simulated = loc

        # Simulating the effect of the action
        if act == "Suck":
            state_simulated[loc_simulated] = "Clean"
        elif act == "Move Right":
            loc_simulated = "B"
        elif act == "Move Left":
            loc_simulated = "A"

        utility = utility_calcu(state_simulated, act)
        if utility > utility_best:
            utility_best = utility
            best_act = act

    return best_act

# Visualization function
def environment_draw(loc, state_env, step, act):
    fig, ax = plt.subplots(figsize=(6, 3))

    # Drawing room A
    ax.add_patch(patches.Rectangle((0, 0), 3, 3, color='gray' if state_env["A"] == "Dirty" else 'lightgreen'))
    ax.text(1.5, 1.5, "A\n" + state_env["A"], ha='center', va='center', fontsize=12)

    # Drawing room B
    ax.add_patch(patches.Rectangle((3, 0), 3, 3, color='gray' if state_env["B"] == "Dirty" else 'lightgreen'))
    ax.text(4.5, 1.5, "B\n" + state_env["B"], ha='center', va='center', fontsize=12)

    # Drawing the agent
    X_agent = 1.5 if loc == "A" else 4.5
    ax.plot(X_agent, 2.5, marker="o", markersize=20, color="blue")
    ax.text(X_agent, 2.9, "🤖", ha='center', va='center', fontsize=14)

    ax.set_title(f"Step {step}: {act}")
    ax.axis("off")
    plt.pause(1)
    plt.close()

# Running the utility-based simulation
def run_utility_agent(start_loc, state_env):
    loc = start_loc
    for step in range(1, 7):
        act = utility_based_agent(loc, state_env)

        environment_draw(loc, state_env, step, act)

        # Performing the action
        if act == "Suck":
            state_env[loc] = "Clean"
        elif act == "Move Right":
            loc = "B"
        elif act == "Move Left":
            loc = "A"
        elif act == "NoOp":
            break  # Stop once the best action is to do nothing

# Starting the simulation
env_init = {"A": "Dirty", "B": "Dirty"}
run_utility_agent("A", env_init.copy())


Output:


By effectively balancing cleaning with minimal movement, the agent chose actions that maximized its utility and avoided pointless operations after utility was maximized.

5. Learning Agents

A learning agent in AI is an agent that can learn from its past experiences. It starts with basic knowledge and then adapts automatically through learning. A learning agent has four main conceptual components:

  1. Learning element: responsible for making improvements by learning from the environment.
  2. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
  3. Performance element: responsible for selecting external actions.
  4. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.

Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
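The critic’s feedback can be sketched as a simple value-estimate update, the same moving-average rule the implementation below uses in `update_memory`; the names here are illustrative:

```python
# Critic feedback: nudge the stored value estimate toward the
# observed reward by a learning rate alpha (exponential moving average).
def update_value(old_value, reward, alpha=0.5):
    return old_value + alpha * (reward - old_value)

v = 0.0
for reward in [10, 10, 10]:   # three rounds of positive feedback
    v = update_value(v, reward)
print(v)  # 8.75 -- converging toward 10
```

Each update moves the estimate part of the way toward the latest feedback, so recent experience gradually reshapes the agent's behavior.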

Here is an implementation of a learning agent.

Code:

import random
import matplotlib.pyplot as plt
import matplotlib.patches as patches

# Agent that learns a cleaning strategy over time
class SmartVacuum:
    def __init__(self):
        # Stores the learned value of each (room, state) pair
        self.memory = {
            ("A", "Dirty"): 10,
            ("B", "Dirty"): 10,
            ("A", "Clean"): 0,
            ("B", "Clean"): 0
        }
        self.rate = 0.5      # How much new info changes memory
        self.discount = 0.9  # Not used in this version

    def decide(self, room, condition):
        # Very basic policy: clean if dirty, else switch room
        if condition == "Dirty":
            return "Clean"
        return "Go Right" if room == "A" else "Go Left"

    def update_memory(self, room, condition, points):
        state = (room, condition)
        old_val = self.memory.get(state, 0)
        self.memory[state] = old_val + self.rate * (points - old_val)

# Simple visual feedback
def draw_rooms(agent_spot, room_state, count, move):
    fig, ax = plt.subplots(figsize=(6, 3))

    # Show room A
    ax.add_patch(patches.Rectangle((0, 0), 3, 3, color='gray' if room_state["A"] == "Dirty" else 'lightgreen'))
    ax.text(1.5, 1.5, f"A\n{room_state['A']}", ha='center', va='center', fontsize=12)

    # Show room B
    ax.add_patch(patches.Rectangle((3, 0), 3, 3, color='gray' if room_state["B"] == "Dirty" else 'lightgreen'))
    ax.text(4.5, 1.5, f"B\n{room_state['B']}", ha='center', va='center', fontsize=12)

    # Draw agent
    bot_x = 1.5 if agent_spot == "A" else 4.5
    ax.plot(bot_x, 2.5, marker="o", markersize=20, color="navy")
    ax.text(bot_x, 2.9, "🤖", ha='center', va='center', fontsize=14)

    ax.set_title(f"Turn {count}: {move}")
    ax.axis("off")
    plt.pause(0.8)
    plt.close()

# Running the agent through steps
def run_simulation():
    # Start with random dirt in the rooms
    rooms = {
        "A": random.choice(["Clean", "Dirty"]),
        "B": random.choice(["Clean", "Dirty"])
    }

    bot_place = "A"
    cleaner = SmartVacuum()

    for i in range(1, 11):
        condition = rooms[bot_place]
        next_move = cleaner.decide(bot_place, condition)

        draw_rooms(bot_place, rooms, i, next_move)

        # Remember where the action was taken before the world changes
        acted_room, acted_condition = bot_place, condition

        # The action results in a reward or a penalty
        if next_move == "Clean":
            reward = 10
            rooms[bot_place] = "Clean"
        elif "Go" in next_move:
            reward = -1
            bot_place = "B" if bot_place == "A" else "A"
        else:
            reward = 0

        # The agent updates its value estimate for the state it acted in
        cleaner.update_memory(acted_room, acted_condition, reward)

# Start the program
run_simulation()


Output:


Through experience-based behavior modification and reward-based learning, the agent gradually enhanced its performance, leading to more intelligent and effective cleaning.
