
  • What is PEAS in AI

    Many types of agents operate in Artificial Intelligence (AI), each serving a different purpose. PEAS (Performance measure, Environment, Actuators, Sensors) is the standard framework for describing such agents: it characterises an agent's task in terms of its performance measure, its environment, its actuators, and its sensors, and it helps explain how different AI agents excel in different environments.

    Among these, rational agents are the most efficient: assuming the necessary information is available, they always choose the action expected to maximise their performance measure.


    Performance Measure

    Defining Performance Measures in AI

    Performance measures in AI are the criteria used to evaluate how well a system performs the tasks assigned to it. They can be quantitative or qualitative. Choosing the performance measure carefully matters because it determines whether an AI system can be judged valid and capable of doing the job it was designed for.

    Types of Performance Measures

    Different kinds of performance measures suit different AI systems and tasks. Common ones include accuracy, precision, recall, F1-score, error rate, and efficiency. The appropriate measure is chosen based on the goals of the AI system and the properties of the problem it is trying to solve.
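
    To make these measures concrete, here is a minimal sketch in plain Python (the labels and predictions are invented for illustration) showing how accuracy, precision, recall, F1-score, and error rate are computed for a binary classifier:

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # invented ground-truth labels
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # invented model predictions

    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

    accuracy = (tp + tn) / len(y_true)    # fraction of correct predictions
    precision = tp / (tp + fp)            # how trustworthy positive predictions are
    recall = tp / (tp + fn)               # how many actual positives were found
    f1 = 2 * precision * recall / (precision + recall)
    error_rate = 1 - accuracy

    print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
          f"recall={recall:.2f} f1={f1:.2f} error_rate={error_rate:.2f}")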

    Role of Performance Measures in AI

    Performance measures are indispensable in the design and improvement of an AI system. They guide optimisation: the designer can quantify how well the system performs and tune it to deliver better results. They also make it possible to compare different AI models and algorithms and pick the best approach for a particular problem.

    Types of Environments

    AI systems operate in many kinds of environments, ranging from controlled and deterministic to dynamic and unpredictable. Some applications, such as robotics, operate in real physical settings, while others, such as natural language processing, work in virtual or digital spaces. The environment has a sizable impact on the complexity of the system's task.

    • Fully Observable & Partially Observable
    • Episodic & Sequential
    • Static & Dynamic
    • Discrete & Continuous
    • Deterministic & Stochastic

    Significance of Environment Modelling

    For AI systems to make effective decisions, modelling the environment is crucial. The better an AI system understands its current situation, the more likely it is to achieve its goals. Environment modelling usually involves gathering information, processing sensory data streams, and representing the environmental features the system can use to guide its actions.

    1. Fully Observable & Partially Observable:

    • Fully Observable: The agent can see the complete state of the environment at any point in time.
    • Partially Observable: The agent cannot see the complete state of the environment; its observations are incomplete or noisy.

    2. Static and Dynamic:

    • Static: The environment does not change while the agent is deliberating; it does not vary over time.
    • Dynamic: The environment can change while the agent is deciding on its actions. Changes may be caused by the agent's own activities or by external factors.

    3. Discrete and Continuous:

    • Discrete: Both the state space and the action space are finite and countable.
    • Continuous: The state space, the action space, or both take values from a continuous range rather than a set of discrete values.

    4. Deterministic and Stochastic:

    • Deterministic: The next state of the environment is completely determined by the current state and the action taken by the agent.
    • Stochastic: The environment involves uncertainty; randomness or probability affects the next state of the environment, even when the current state and action are known.
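
    As an illustrative sketch, these properties can be recorded as a simple profile per task. The field names below are our own choice rather than a standard API, and the two example classifications follow the common textbook treatment of chess and taxi driving:

    from dataclasses import dataclass

    @dataclass
    class EnvironmentProfile:
        # One flag per dimension discussed above.
        fully_observable: bool
        episodic: bool
        static: bool
        discrete: bool
        deterministic: bool

    chess = EnvironmentProfile(fully_observable=True, episodic=False,
                               static=True, discrete=True, deterministic=True)
    taxi_driving = EnvironmentProfile(fully_observable=False, episodic=False,
                                      static=False, discrete=False, deterministic=False)

    print(chess)
    print(taxi_driving)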

    Actuators

    In an AI system, actuators are the parts or components that carry out the actions or responses the system decides on. They are the means through which the AI system acts on its environment. Depending on the use case, actuators come in different shapes and configurations.

    Types of Actuators

    Actuators in AI can be classified in different ways depending on their function. In robotics, for example, they can be physical devices (motors or servos) that control the movement of robot attachments. In virtual settings, actuators can be software components that produce text, speech, or visual output.

    The Role of Actuators in an Artificial Intelligence System

    Actuators are how an AI system's decisions take effect in the environment. Based on how the system perceives its surroundings and the performance measures it is intended to achieve, they execute the actions or commands the system issues. The effectiveness and accuracy of the actuators significantly determine the overall performance of an AI application.


    Sensors

    Sensing the Environment in AI

    Sensors are the components that gather data about the environment and feed it to the AI system as input. The crucial information they provide lets the system build a picture of what is going on around it. They are the AI system's sensory apparatus, supporting well-informed autonomous decision-making.

    The Significance of Sensors in AI

    An AI system depends on its sensors because they provide the raw data that drives its decision-making. The accuracy and reliability of the sensors are critical: any inconsistency or error in sensor data will mislead the system into flawed actions. Calibration and sensor-fusion techniques are therefore employed to make sensor readings more accurate, as the sketch below illustrates.
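
    As a toy sketch of sensor fusion (all numbers invented), two noisy distance readings can be combined by inverse-variance weighting so that the less noisy sensor counts for more:

    def fuse(reading_a, var_a, reading_b, var_b):
        # Weight each reading by the inverse of its noise variance.
        w_a = 1.0 / var_a
        w_b = 1.0 / var_b
        return (w_a * reading_a + w_b * reading_b) / (w_a + w_b)

    lidar_reading, lidar_var = 10.2, 0.04     # metres, low-noise sensor
    camera_reading, camera_var = 10.9, 0.25   # metres, noisier sensor

    # The fused estimate (about 10.30 m) sits much closer to the LiDAR reading.
    print(f"fused distance: {fuse(lidar_reading, lidar_var, camera_reading, camera_var):.2f} m")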

    Integrating PEAS Components

    1. Achieving AI Intelligence through Integration

    AI systems exhibit intelligent behaviour through the efficient integration of the PEAS components. Performance metrics guide the AI's decision-making; understanding of the environment lets it adapt to changing conditions; sensors supply the required input; and actuators carry out the necessary operations. Together they form a closed-loop system.

    2. Challenges in PEAS Integration

    Integrating PEAS components in a sophisticated AI system is often challenging. Sensors must provide accurate data, and actuators must be designed and tested thoroughly so they respond appropriately to the AI's decisions. It is equally crucial to pick performance metrics that match the goals of the AI system.

    3. Case Studies for the Integration of PEAS

    As an example, consider a self-driving automobile. Here the performance metric can be reaching the destination quickly and safely. The environment comprises the weather, the traffic, and the road.

    The vehicle’s actuators control braking, steering, and acceleration, while its sensors obtain data from GPS, LiDAR, and cameras; all of this data is used to guide navigation and make decisions.
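
    The same decomposition can be written down directly. The following is only a descriptive sketch, with the entries paraphrased from the example above:

    # The self-driving car example expressed as a PEAS record.
    peas_self_driving_car = {
        "performance_measure": ["safe arrival", "travel time", "passenger comfort"],
        "environment": ["roads", "traffic", "pedestrians", "weather"],
        "actuators": ["steering", "brakes", "accelerator"],
        "sensors": ["GPS", "LiDAR", "cameras"],
    }

    for component, items in peas_self_driving_car.items():
        print(f"{component}: {', '.join(items)}")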

    Types of Agents Using PEAS

    The PEAS framework (Performance Measure, Environment, Actuators, Sensors) is essential for building and understanding different types of AI agents. Agents are broadly categorised by their complexity and by how they use the PEAS model.

    Reactive Agents: Simple and Reflexive Applications of PEAS

    Reactive agents are the simplest AI agents. They map raw sensory input directly to outputs, without holding internal state and without planning. Such agents respond to the environment through predefined condition-action rules.

    How PEAS Applies

    • Performance Measure: Typically emphasises correctness or speed, i.e., minimising errors or maximising response speed. For example, a vacuum-cleaner agent's performance measure could be the amount of dirt it cleans per run.
    • Environment: Typically static or semi-static, changing rarely or not at all, for instance, a room to be cleaned or a temperature-controlled area.
    • Actuators: Simple ones, such as on/off switches, left/right movement, or triggers (e.g., starting a vacuum cleaner's suction).
    • Sensors: Basic sensors that sense the current state of the environment, such as a dirt sensor for a robotic vacuum or a temperature sensor for a thermostat.
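
    A minimal sketch of such an agent is a thermostat that maps the current percept straight to an action, with no memory or planning; the 20/24 degree thresholds here are invented for illustration:

    def thermostat_agent(temperature_c):
        # Condition-action rules applied to the current percept only.
        if temperature_c < 20:
            return "heat on"
        elif temperature_c > 24:
            return "cool on"
        return "idle"

    for reading in [17.5, 22.0, 26.3]:
        print(reading, "->", thermostat_agent(reading))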

    Examples of Reactive Agents:

    • Automatic vacuum cleaners (e.g., Roomba)
    • Basic thermostats
    • Motion-detection security systems

    Advantages:

    • Fast and efficient in predictable environments.
    • Simple and cost-effective to implement.

    Limitations:

    • They cannot learn from experience or adapt to unexpected changes; without memory or understanding, their behaviour is fixed.
    • Limited scope of functionality in dynamic or complex environments.

    Deliberative Agents: PEAS in Planning and Decision-Making

    Deliberative agents are more sophisticated than reactive agents. They maintain an internal model of the environment that supports learning and informed decisions. These agents typically exhibit goal-directed behaviour and use planning algorithms.

    How PEAS Applies

    • Performance Measure: Oriented toward longer-term objectives, such as completing a task in the shortest possible time or reaching a destination with the least expenditure of resources.
    • Environment: Typically more complex, dynamic, and partially observable, for example, navigating a city or delivering packages.
    • Actuators: Support advanced activities such as route selection, task execution, and manipulating objects according to a planned strategy.
    • Sensors: Richer sensors, such as cameras and GPS units, that provide detailed visual and location input.
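
    A small sketch of the deliberative style (the road map below is invented) is an agent that plans a route to its goal with breadth-first search over an internal model of the world before acting:

    from collections import deque

    road_map = {                       # internal model: node -> reachable nodes
        "depot": ["a", "b"],
        "a": ["c"],
        "b": ["c", "goal"],
        "c": ["goal"],
    }

    def plan_route(start, goal):
        # Breadth-first search returns a shortest path in the model.
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in road_map.get(path[-1], []):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None

    print(plan_route("depot", "goal"))   # ['depot', 'b', 'goal']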

    Examples of Deliberative Agents

    • Autonomous vehicles planning optimal routes.
    • Delivery drones.
    • Scheduling systems for manufacturing processes.

    Advantages

    • Capable of handling complex environments with dynamic changes.
    • Can weigh goals against the resources currently at hand.

    Limitations

    • Computationally intensive due to planning and decision-making processes.
    • May require substantial knowledge of the environment.

    Learning Agents: Integration of Feedback for Evolving PEAS Configurations

    Learning agents are the most adaptable and intelligent class of agents. They improve over time by learning from what they have done: feedback from the environment makes their decisions and actions progressively better.

    How PEAS Applies

    • Performance Measure: Metrics that improve with learning, for instance, a decreasing error rate, increasing accuracy, or shrinking task time.
    • Environment: Complex, dynamic spaces whose behaviour cannot be fully predicted in advance.
    • Actuators: Actions whose flexibility and adaptability come from the underlying learning algorithms, for example, adapting gameplay moves or updating a trading strategy.
    • Sensors: Rich and diverse input sources that are sampled continuously, such as video feeds and user interactions.
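
    As a minimal sketch of the learning style, here is one tabular Q-learning update in an invented two-state world; the constants and rewards are chosen only for illustration:

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2      # learning rate, discount, exploration
    ACTIONS = ["left", "right"]
    Q = defaultdict(float)                     # (state, action) -> value estimate

    def choose_action(state):
        if random.random() < EPSILON:          # occasional exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        # Move the estimate toward reward + discounted best next value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    a = choose_action(0)                       # one illustrative interaction
    update(0, a, reward=1.0 if a == "right" else 0.0, next_state=1)
    print(dict(Q))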

    Examples of Learning Agents:

    • Recommendation systems (e.g., Netflix, Amazon).
    • Gaming AI (e.g., AlphaGo and chess engines).
    • Personal assistants (e.g., Siri, Google Assistant).

    Advantages:

    • Can improve substantially over time and transfer to new settings.
    • Handles complex and unpredictable scenarios effectively.

    Limitations:

    • High computational and data requirements.
    • Risk of overfitting or incorrect learning in the absence of proper feedback mechanisms.

    AI PEAS Examples

    To demonstrate the PEAS framework, let us examine some instances.

    Driverless Cars

    • Performance Measure: Guarantee passenger safety, punctual arrival, safe navigation, and effective route planning.
    • Environment: The roads, traffic patterns, pedestrians, and weather the automobile must traverse.
    • Actuators: The car's braking, steering, and accelerating systems, which carry out what the AI has prescribed.
    • Sensors: LiDAR, cameras, GPS, radar, and more, which let the vehicle see and respond to the world around it in real time.

    Industrial Robots

    • Performance Measure: Maximise productivity and process precision while minimising energy demand and downtime.
    • Environment: Factory floors, conveyor belts, machines, raw materials, and other robots or workers.
    • Actuators: These can be grippers, robotic arms, conveyor motors, welding, or painting tools, etc.
    • Sensors: Cameras, proximity sensors, force sensors, temperature sensors, and motion detectors.

    Virtual Assistants (e.g., Alexa, Siri)

    • Performance Measure: Understand and respond to user queries correctly and execute commands promptly, keeping the user satisfied.
    • Environment: Voice inputs, smart home devices, mobile apps, and the cloud.
    • Actuators: Speakers, app interfaces, text-to-speech converters, and smart device controllers that produce the output.
    • Sensors: Microphones capturing voice input, plus data sources reached via APIs.

    Gaming AI (e.g., Chess Engines, Video Game NPCs)

    • Performance Measure: Provide a challenging yet enjoyable game, whether the aim is to beat the player or to help raise the player's score.
    • Environment: Game board (e.g., chess), virtual worlds, other players, and in-game rules.
    • Actuators: The ability to move chess pieces, control in-game characters, and trigger animations or events.
    • Sensors: Player actions, the game state, and environmental conditions (e.g., an enemy's position in an RPG).
  • What is the Turing Test in AI

    In 1950, Alan Turing introduced a test to check whether a machine can think like a human or not; this test is known as the Turing Test. In this test, Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions.

    Turing introduced the Turing Test in his 1950 paper, “Computing Machinery and Intelligence,” which considered the question, “Can machines think?”


    The Turing test is based on a party game, “Imitation Game,” with some modifications. This game involves three players, in which one player is the Computer, another player is the human responder, and the third player is the human Interrogator, who is isolated from the other two players and whose job is to find which player is the machine among the two of them.

    Consider Player A is a computer, Player B is human, and Player C is an interrogator. The interrogator is aware that one of them is a machine, but he needs to identify this on the basis of questions and their responses.

    The conversation between all players is via keyboard and screen, so the result would not depend on the machine’s ability to convert words into speech.

    The test result does not depend on each correct answer, but only on how closely it responds like a human answer. The computer is permitted to do everything possible to force a wrong identification by the interrogator.

    The questions and answers can be like:

    Interrogator: Are you a computer?

    Player A (Computer): No

    Interrogator: Multiply two large numbers, such as (256896489*456725896)

    Player A: Long pause and gives the wrong answer.

    In other words, if the interrogator cannot tell which is human and which is a machine, the computer passes the test and is said to be intelligent, a thinking machine.

    In 1991, New York businessman Hugh Loebner launched a prize competition, offering $100,000 to the first computer to pass the Turing Test. To date, no AI program has passed an undiluted Turing Test.

    History of the Turing Test

    The Turing Test, introduced by Alan Turing in 1950, is a remarkable milestone in the history of Artificial Intelligence (AI). It came to light in his paper 'Computing Machinery and Intelligence,' which asked the profound question 'Can machines think?' and sought a practical way to judge whether machine intelligence is comparable to the human kind.

    Turing had become interested in the idea of creating thinking machines that display intelligent behaviour; this curiosity was the basis of the test. The Turing Test was his practical method for deciding whether a machine could converse naturally enough with a human being to be considered human.

    Turing's work made this test a foundation of AI research and opened the first serious discussion of machine intelligence. It offered a basis for the evaluation of AI systems. The Turing Test has since changed structurally and continues to attract both refinement and debate. Nevertheless, it was of huge historical importance in the development of AI and remains a source of motivation for current researchers and a benchmark for measuring AI progress.

    Variations of the Turing Test

    New versions of the Turing Test have been proposed over the years in an attempt to overcome its limitations and better reflect the true capabilities of an AI:

    1. Total Turing Test: An extended version of the Turing Test that is not restricted to text-based conversation. It measures how well the machine understands and reacts not only to words but also to visual and physical cues provided by the interrogator, including seeing objects shown to it and taking the desired actions on them. Essentially, it asks whether the AI can act in the world in a way that reflects a deeper understanding of it.
    2. Reverse Turing Test: Here the roles are reversed, with a twist on the traditional Turing Test: the machine itself acts as the interrogator and must distinguish humans from other machines based on the responses it receives. With this reversal, the AI is tested on its ability to evaluate intelligence, making it possible to detect artificial intelligence.
    3. Multimodal Turing Test: A concept for assessing AI's capability to process and respond to several modes of communication at once. It asks whether an AI can effortlessly handle text, speech, images, and perhaps other modalities simultaneously. This variation acknowledges the many ways in which we communicate and asks whether the AI can keep up with the complex ways in which we engage.

    Chatbots to attempt the Turing test

    ELIZA: Joseph Weizenbaum created ELIZA, a natural language processing computer program, to demonstrate that machines could hold a conversation with humans. It was one of the first chatterbots to attempt the Turing Test.

    Parry: Kenneth Colby created the chatterbot Parry in 1972. Parry was designed to simulate a person with paranoid schizophrenia and was described as 'ELIZA with attitude'. In the early 1970s, Parry was tested using a variant of the Turing Test.

    Eugene Goostman: Eugene Goostman, a chatbot created in Saint Petersburg in 2001, participated in various versions of the Turing Test. It was portrayed as a 13-year-old boy, and in a competition billed as the largest Turing Test contest in the world, it fooled 29 percent of the judges into thinking it was human.

    The Chinese Room Argument

    Many philosophers have argued against the whole concept of Artificial Intelligence. The most famous argument of this kind is the 'Chinese Room'.

    In 1980, John Searle presented the 'Chinese Room' thought experiment in his paper 'Minds, Brains, and Programs', contradicting the premise of Turing's test. He argued that programming may make a computer capable of producing words, but it gives the computer no real understanding of those words and no consciousness.

    Machines such as ELIZA and Parry, he added, might pass the Turing Test with ease by manipulating keywords and symbols, yet they have no real understanding of language. They therefore lack any 'thinking' capability of the kind a human has.

    Features Required for a Machine to Pass the Turing Test

    • Natural language processing: To communicate with the interrogator in an ordinary human language such as English.
    • Knowledge representation: To store and retrieve information during the test.
    • Automated reasoning: To answer questions using the stored information.
    • Machine learning: To adapt to new circumstances and detect patterns.
    • Vision (for the Total Turing Test): To recognise the interrogator's actions and other objects.
    • Motor control (for the Total Turing Test): To act on objects as requested.

    Limitations of the Turing Test

    • Not a True Measure of Intelligence: Passing the Turing Test does not establish machine intelligence, still less machine consciousness. John Searle's 'Chinese Room' objection is precisely this criticism: a computer may recreate human-like responses with no understanding at all.
    • Simplicity of Test Scenarios: Because the Turing Test confines itself to text-based interaction, it leaves out everything a machine can or cannot do in seeing and responding to the physical world.

    Applications of the Turing Test in Artificial Intelligence

    Role in Chatbot and Virtual Assistant Development

    Chatbots and virtual assistants such as ChatGPT, Alexa, and Siri are a fascinating type of AI application: they try to replicate human communication with all its merits.

    • Designing Human-like Conversations: Developers often build chatbots so that they pass as human as far as possible, in the spirit of the Turing Test. This involves staying in context, adjusting tone, and giving responses appropriate to the situation.
    • Evaluating User Engagement: AI systems that approach the Turing Test criteria turn out to be better at holding a user's interest. For example, much of the appeal of virtual assistants has come from conversational touches that create a 'personalised' feel, such as humour and empathy.
    • Iterative Improvement Based on Feedback: Methods inspired by the Turing Test support continuous refinement through feedback. Systems are improved on their ability to handle ambiguous queries, detect humour, and give intelligent responses.

    Advancing Natural Language Understanding

    The Turing Test has accelerated the development of natural language understanding (NLU), which plays a key role in AI.

    • Contextual Understanding: Turing's idea demands contextual understanding, so AI systems have had to develop sophisticated processing algorithms that run in the background and generate flowing, coherent conversations.
    • Semantic Analysis and Disambiguation: AI models have been trained to distinguish the senses of polysemous words and sentences from their context in order to pass the Turing Test.
    • Enhanced Machine Translation: The need to convey information in a human way, accurately and with cultural awareness, beyond mere compliance with natural language patterns, has encouraged better machine translation.

    Benchmarking Human-like Behaviour in AI Systems

    The Turing Test thus serves as a benchmark for judging whether an AI system's behaviour is 'human'.

    • Evaluating Conversational Competence: A system is assessed by its ability to engage in human forms of thought in interaction. Success is a good sign that it approaches human-equivalent intelligence for the tasks involved.
    • Comparative Analysis across AI Systems: The test can identify the areas where an AI system is strong and the areas where it needs to improve, encouraging competition and innovation among developers to build better models.
    • Setting Goals for General AI: The test has inspired long-term goals for developing general AI systems that can perform a variety of human-like tasks, a standard that remains out of reach for current, domain-specific AI systems.

    Advantages of the Turing Test

    • Simplicity and Intuitive Understanding: The Turing Test is an easy-to-understand, simple procedure. Because it is framed around ordinary human interaction, even a person without technical knowledge can grasp how it tests a machine's intelligence.
    • Focus on Practical Outcomes: The test judges observable behaviour, i.e., whether a machine's conversation is indistinguishable from a human's, rather than internal mechanisms.
    • Encourages Natural Language Processing (NLP) Development: Since it is an NLP-based test, it has driven progress in NLP, a necessary ingredient of chatbots, virtual assistants, and conversational AI systems generally.
    • Universal Appeal: The Turing Test does not depend on any particular problem domain; it can be discussed and applied without requiring a specific problem to be solved.
    • Historical and Philosophical Significance: The Turing Test became a focal point of philosophical discussion on intelligence and consciousness, while remaining a popular area of AI and cognitive-science research.

    Disadvantages of the Turing Test

    • Ambiguity in Defining Intelligence: The test measures the machine's capacity to imitate human behaviour, not other types of intelligence such as creativity, problem-solving, or ethical reasoning.
    • Focus on Deception: Passing the Turing Test often means tricking the human judge rather than demonstrating real intelligence. This reliance on deception means a superficial result from an AI does not attest to true progress.
    • Bias toward Human-Like Behaviour: The test assumes human-like behaviour is the mark of intelligence, yet there is no reason a machine could not be as intelligent as a human, or more so, while behaving in distinctly non-human ways.
    • Limited Scope: The Turing Test involves conversation and little else; physical action, perception, and broader reasoning are left out.
    • Vulnerability to Predefined Scripts: Scripted responses or loopholes can circumvent the test and mislead about the actual capability of AI systems.
    • Ethical and Philosophical Critiques: Philosophers such as John Searle (Chinese Room argument) have questioned whether passing the Turing Test stands in for intelligence, understanding, or awareness and, by extension, whether it is a valid test of intelligence.
  • Agent and Environment in Artificial Intelligence

    In Artificial Intelligence, an ‘agent’ is an intelligent system with sensors and actuators that operates in some environment while attempting to satisfy certain objectives. The environment is the outside world in which the agent operates; it describes the context in which the agent takes action.


    Agents range from basic thermostats to sophisticated robots. Each follows a perception-action cycle, acting on the information it extracts from the environment, and no two environments are alike. This interaction is the distinctive thing about AI: being able to sense an environment as an agent, perceive what is important, and act adequately.

    An agent in AI is an autonomous object in the world that can perceive and respond. Placed in an environment, the agent must essentially adapt to that environment and make a series of decisions within it.

    Examples of Agents:

    • Robotic Vacuum Cleaners: They can identify the presence of obstacles and clean the floors effectively with the help of sensors used in them.
    • Self-Driving Cars: Self-driving cars use their senses to perceive the road environment and make decisions on the road to drive safely based on the current conditions.
    • Virtual Assistants: Virtual assistants like Alexa or Siri interpret the inputs from the users, comprehend commands, and perform tasks ranging from sending reminders to even operating smart devices.

    What is an Environment in AI?

    The environment, in the context of Artificial Intelligence, refers to the surroundings in which an agent operates to achieve a given task. It is the setting in which the agent performs and which furnishes it with feedback. It can be a physical space or a virtual one designed to model real processes or abstract concepts.

    The agent receives feedback from the environment in response to its actions, and the environment determines the rewards the agent gains from completing its goals.

    Examples of Environments

    • A maze for a robot navigating toward a goal.
    • A virtual environment in which an AI plays a game and confronts other players.
    • The real-life environment of autonomous vehicles: traffic conditions, weather, and road conditions, among others.

    AI systems are accordingly designed to overcome the issues posed by, and accomplish their aims in, whatever environment is relevant.

    Agent Terminology

    It is also vital to get familiar with the key terms regarding agents to better understand how they work in their environments:

    • Percept: The signal an agent receives from its environment through its sensors at a given instant. For instance, a robotic vacuum cleaner perceives the layout of the area and the objects it may encounter within it.
    • Percept Sequence: The complete record of everything the agent has perceived up to a given time. It lets the agent make decisions based on past inputs. For example, a self-driving car uses its percept sequence to adjust its decisions based on previously observed traffic patterns.
    • Action: The response the agent produces to a percept or percept sequence. Actions are carried out through effectors such as a robot's arm and fingers, or other executing instruments in an AI-assisted system, such as making a move in a game of chess.
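
    These three terms fit together in the basic perception-action loop; the percept stream and the rule below are invented for illustration:

    percept_stream = ["clear", "obstacle", "clear"]
    percept_sequence = []            # full record of everything sensed so far

    for percept in percept_stream:
        percept_sequence.append(percept)
        action = "turn" if percept == "obstacle" else "forward"
        print(f"perceived {percept!r} (history: {percept_sequence}) -> {action}")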

    Types of Intelligent Agents

    Intelligent agents are classified according to their functions and capabilities, as well as the intelligence that they possess.

    Simple Reflex Agents

    These agents operate on the current state only and are not cognizant of historical data. They respond via condition-action rules: when a percept triggers a condition, the agent looks up the corresponding predefined action and executes it.

    Model-based Reflex Agents

    These agents act like reflex agents, but their knowledge of their surroundings is more extensive: a model of the world, including the agent's percept history, is built into their internal state.

    Goal-based Agents

    These agents, also known as rational agents, contain, in addition to the information stored by model-based agents, goal information describing desirable situations.

    Utility-based agents

    These agents are similar to goal-based agents, but they add a utility measure that rates each possible way of achieving the goal and chooses the best action. Rating criteria can include the probability of success or the resources required.

    Types of Environment in AI

    Several kinds of AI environments are regularly applied. Some of the types of environments in AI include deterministic, stochastic, fully observable, partially observable, continuous, discrete, episodic, sequential, static, dynamic, single-agent, multi-agent, competitive, and collaborative.

    Fully Observable vs Partially Observable Environment

    Based on how much information the agent has about the status of the environment at any given moment, the first type of environment in AI can be either fully or partially observed.

    A fully observable environment is one in which the agent has complete knowledge of the environment's present state: the agent has access to every aspect of the environment needed for making decisions. Checkers, chess, and similar games are instances of fully observable environments.

    A partially observable environment is one where the agent cannot get a full view of the environment at a given moment. The agent can interact with only part of the environment, while the rest remains hidden from it. Driving a car through traffic is an example of a partially observable environment.

    Deterministic vs Stochastic

    The environment in artificial intelligence can be categorized as deterministic or stochastic based on the predictability of the results of an agent's actions. An environment is deterministic if the result of an action can be predicted with absolute certainty: the outcome of the agent's behavior is entirely determined by the current state of the environment and the action taken.

    Examples of deterministic environments are cases where the environment has clear and predictable responses, such as simple mathematical problems where the result of each arithmetic operation is distinctly described.

    A stochastic environment, on the other hand, is one where the results of an action are not guaranteed but involve probability. The environment itself plays a part in producing the result of the agent's action, with an element of chance involved. Examples of stochastic situations include games of chance, such as card games like poker or wheel-spinning games like roulette.
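
    The distinction can be sketched with two toy transition functions (the states and probabilities are invented): a deterministic step always yields the same next state, while a stochastic step draws it from a distribution:

    import random

    def deterministic_step(state, action):
        # The same state and action always give the same successor.
        return state + (1 if action == "right" else -1)

    def stochastic_step(state, action):
        # With probability 0.8 the action works; otherwise the agent slips.
        intended = deterministic_step(state, action)
        return intended if random.random() < 0.8 else state

    print(deterministic_step(0, "right"))                    # always 1
    print([stochastic_step(0, "right") for _ in range(5)])   # mostly 1, sometimes 0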

    Competitive vs. Collaborative

    Another classification of AI environments is competitive versus collaborative, based on the relationships between the agents: they may be in direct competition with each other or may cooperate toward a shared objective.

    A competitive environment is one where multiple agents strive to achieve conflicting objectives. Each agent's performance depends on the actions of the others, and each must act strategically against them to achieve its goals. A good example of a competitive environment is a game such as chess.

    In a collaborative environment, a number of agents are involved in some form of cooperative project. Each agent's success depends on the success of the others, and the agents need to work together cohesively to accomplish their goals and targets. Search-and-rescue operations are one example of a collaborative setting.

    Single-agent vs Multi-agent

    The environment in Artificial Intelligence may be categorized as either a single or multi-agent environment based on the number of entities that are in the environment.

    A single-agent environment is a system where a single agent must act alone to accomplish a certain task. Examples include puzzles and maze games. The agent applies relevant search algorithms or planning methods to find its way to the goal state.

    A multi-agent environment is a setting where different agents interact and act upon the surroundings with a view to attaining personal or mutual objectives. Classical examples are multiplayer games and traffic simulations. To decide how to behave, the agents may apply game theory or multi-agent reinforcement learning.

    Static vs Dynamic

    Another way of categorizing AI environments is by whether and how they change: static versus dynamic.

    A static environment is constant and unchanging: its state stays the same while the agent deliberates, and only the agent's own actions alter it. Examples of static environments are maths problems or puzzles like a Rubik's cube. The agent could, for instance, use a search algorithm or a decision tree to choose its actions.

    A dynamic environment is one that is always changing: its state can evolve over time independently of the agent's actions. Examples of dynamic environments are video games or robotics applications. The agent has to employ methods such as planning or reinforcement learning to keep its activity in step with the changing environment.

    Discrete vs Continuous

    The environment within Artificial Intelligence can be categorized into discrete or continuous based on the state and action space.

    The collection of all potential states that the environment may be in is known as the state space. For example, the state space in a chess game would be a collection of all possible places for pieces on the checkered board. In a robotic control job, the state space can contain data on the location or velocity of the robot and its surroundings, among others.

    The action space is the action set from which the agent selects the action to be taken in any of the states of the environment. For example, if one is in a game such as chess, the action space would be defined as the set of moves that one can make in the game. When performing a robotic control job, the action space may contain orders to alter the robot’s speed or direction.

    A discrete environment is one where both the state space and the action space are finite or countable, for instance a board game like chess or checkers, where the set of possible configurations is fixed. The agent's decisions can be made using methods such as a search algorithm or a decision tree.

    In a continuous environment, on the other hand, the state space, the action space, or both take values from a continuous range and are effectively infinite. Robotics and control systems are typical continuous environments.

    Because the state and action spaces are continuous, the agent's decision-making process must handle continuous quantities as well. It has to incorporate techniques such as reinforcement learning or optimization in order to learn and adapt.

    Episodic vs Sequential

    In AI, based on the task at hand and the relationship between the agent's actions and the environment, the environment can be episodic or sequential.

    An episodic environment is one in which the agent's experience divides into independent episodes: an action taken in one episode does not affect future episodes, and the agent aims to maximize the immediate reward received in each episode rather than a long-run total. Classifying parts passing on a conveyor belt is an example of an episodic task. The agent can employ methods such as the Monte Carlo method or Q-learning to find the best policy for each episode.

    In a sequential environment, however, the agent's decisions affect the future states of the environment. Such an agent aims to maximize the total reward accumulated over many interactions. Chess, robotics applications, and video games are examples of sequential environments.

    Because decisions must look ahead, the agent has to employ planning methods such as dynamic programming or reinforcement learning in order to attain the best policy over many steps.

    Known vs Unknown

    The environment in artificial intelligence can also be categorized by how much information the agent has about it: a known environment versus an unknown environment.

    An environment is considered known when the agent is fully aware of its payoffs, transition functions, and rules. The agent always knows the set of actions open to it, and the result of each action is fully predictable. Typical known environments include games such as chess or tic-tac-toe. In a known environment, the agent can employ appropriate strategies, such as search algorithms or decision trees, in its operation.

    An unknown environment is one where the agent has little or no understanding of the environment's rules, its state transitions, or the rewards that may follow from any action. The agent may not know which actions are allowable in a given state, or the result of an action may be unpredictable.

    Unknown environments include, for example, exploration tasks and many real-life applications. Techniques such as reinforcement learning, which balances exploration against exploitation, must be applied to optimize the agent's behavior in an environment it cannot fully identify.

    However, it is crucial to understand that the distinction between Known vs Unknown and fully observable vs partially observable environments is orthogonal. For instance, an environment can be recognized as a known but partially observable environment, or it can be identified as an unknown, fully observable environment.

    Turing Test and Environment Interaction

    The Turing Test measures an agent's capability to base its actions on those of a normal human being. As stated earlier, this type of test is sensitive to the complexity of the environment, and the extent to which the agent can cope with it indicates its adaptability.

    In basic settings, an agent need only follow specific procedures; in an environment that is complex, unpredictable, and open-ended, the agent must be able to reason, solve problems, learn, and choose in its best interest.

    In fact, knowledge about the environment is one of the most important requirements for the design of agents to perform optimally in any given conditions.

  • Intelligent Agent in AI

    AI can be defined as the study of rational agents and their environments. Agents sense the environment through sensors and act on it through actuators. An AI agent can have mental properties such as knowledge, belief, and intention.

    What is an Agent?

    An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:

    • Human-Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract which work as actuators.
    • Robotic Agent: A robotic agent can have cameras, an infrared range finder, and NLP capabilities as sensors, and various motors as actuators.
    • Software Agent: A software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.

    Hence the world around us is full of agents, such as thermostats, cellphones, and cameras; even we ourselves are agents.

    Before moving forward, we should first know about sensors, effectors, and actuators.

    Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.

    Actuators: Actuators are the components of machines that convert energy into motion. They are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

    Effectors: Effectors are the devices which actually affect the environment, such as legs, wheels, arms, fingers, wings, fins, and a display screen.


    Intelligent Agents:

    An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of a (very simple) intelligent agent.

    Following are the main four rules for an AI agent:

    • Rule 1: An AI agent must have the ability to perceive the environment.
    • Rule 2: The observation must be used to make decisions.
    • Rule 3: Decision should result in an action.
    • Rule 4: The action taken by an AI agent must be a rational action.

    Rational Agent:

    A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.

    A rational agent is said to perform the right things. AI is about creating rational agents, drawing on game theory and decision theory, for various real-world scenarios.

    For an AI agent, rational action is most important because in reinforcement learning, the agent gets a positive reward for each best possible action and a negative reward for each wrong action.

    Note: Rational agents in AI are very similar to intelligent agents.

    Rationality:

    The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of following points:

    • Performance measure which defines the success criterion.
    • Agent prior knowledge of its environment.
    • Best possible actions that an agent can perform.
    • The sequence of percepts.

    Note: Rationality differs from Omniscience because an Omniscient agent knows the actual outcome of its action and act accordingly, which is not possible in reality.

    Structure of an AI Agent

    The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:

    Agent = Architecture + Agent program

    Following are the main three terms involved in the structure of an AI agent:

    Architecture: The architecture is the machinery that the AI agent executes on.

    Agent Function: The agent function maps a percept sequence to an action:

    f : P* → A

    Agent program: The agent program is an implementation of the agent function. It executes on the physical architecture to produce the function f.
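
    The split can be sketched as a table-driven agent program (the table entries below are invented) running on a bare-bones architecture that feeds it percepts:

    # The agent program realises f : P* -> A, mapping percept sequences to actions.
    ACTION_TABLE = {
        ("dirty",): "suck",
        ("clean",): "move",
        ("clean", "dirty"): "suck",
    }

    def agent_program(percept_sequence):
        return ACTION_TABLE.get(tuple(percept_sequence), "noop")

    def architecture(percepts):
        # The machinery the program runs on: collect percepts, emit actions.
        history = []
        for p in percepts:
            history.append(p)
            yield agent_program(history)

    print(list(architecture(["clean", "dirty"])))   # ['move', 'suck']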

    PEAS Representation

    PEAS is a model on which an AI agent works. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is an acronym of four terms:

    • P: Performance measure
    • E: Environment
    • A: Actuators
    • S: Sensors

    Here performance measure is the objective for the success of an agent’s behavior.

    PEAS for self-driving cars:


    Let’s suppose a self-driving car then PEAS representation will be:

    Performance: Safety, time, legal drive, comfort

    Environment: Roads, other vehicles, road signs, pedestrians

    Actuators: Steering, accelerator, brake, signal, horn

    Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.

    Example of Agents with their PEAS representation

    1. Medical Diagnosis
      • Performance measure: Healthy patient, minimized cost
      • Environment: Patient, hospital, staff
      • Actuators: Tests, treatments
      • Sensors: Keyboard (entry of symptoms)

    2. Vacuum Cleaner
      • Performance measure: Cleanness, efficiency, battery life, security
      • Environment: Room, table, wood floor, carpet, various obstacles
      • Actuators: Wheels, brushes, vacuum extractor
      • Sensors: Camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor

    3. Part-picking Robot
      • Performance measure: Percentage of parts in correct bins
      • Environment: Conveyor belt with parts, bins
      • Actuators: Jointed arms, hand
      • Sensors: Camera, joint angle sensors
  • Types of Agents in AI 

    Any entity with sensors for sensing its environment and actuators for acting on its environment is an agent in the field of artificial intelligence (AI). Agents are a key component in the design of an intelligent system because agents have the potential to learn, adapt, make choices, and interact with the environment. AI agents are of many types based on their complexity, capabilities, and tasks; they can be as simple as a reflex agent or as sophisticated as a learning agent.

    Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better actions over time. These are given below:

    • Simple Reflex Agent
    • Model-based reflex agent
    • Goal-based agents
    • Utility-based agent
    • Learning agent

    1. Simple Reflex agent:

    Simple reflex agents are the simplest agents. They make decisions on the basis of the current percept and ignore the rest of the percept history. These agents succeed only in a fully observable environment. A simple reflex agent works on condition-action rules, meaning it maps the current state directly to an action, like a room-cleaner agent that acts only if there is dirt in the room.

    Problems with the simple reflex agent design approach:

    • They have very limited intelligence.
    • They have no knowledge of non-perceptual parts of the current state.
    • Their condition-action rule tables are usually too big to generate and store.
    • They are not adaptive to changes in the environment.

    Here is an implementation of a simple reflex agent.

    Code:

    import matplotlib.pyplot as plt
    import matplotlib.patches as patches

    # Defining the simple reflex agent rule
    def agent_Simple_Reflex(loc, stat):
        if stat == "Dirty":
            return "Suck"
        elif loc == "A":
            return "Move Right"
        elif loc == "B":
            return "Move Left"

    # Visualizing with the function
    def environment_draw(loc, environment, step, act):
        fig, ax = plt.subplots(figsize=(6, 3))

        # Draw two rooms, A and B
        ax.add_patch(patches.Rectangle((0, 0), 3, 3, fill=True, color='gray' if environment["A"] == "Dirty" else 'lightgreen'))
        ax.text(1.5, 1.5, "A\n" + environment["A"], ha='center', va='center', fontsize=12)

        ax.add_patch(patches.Rectangle((3, 0), 3, 3, fill=True, color='gray' if environment["B"] == "Dirty" else 'lightgreen'))
        ax.text(4.5, 1.5, "B\n" + environment["B"], ha='center', va='center', fontsize=12)

        # Drawing agent
        X_agent = 1.5 if loc == "A" else 4.5
        ax.plot(X_agent, 2.5, marker="o", markersize=20, color="blue")
        ax.text(X_agent, 2.9, "🤖", ha='center', va='center', fontsize=14)

        # Text info
        ax.set_title(f"Step {step}: {act}")
        ax.axis("off")
        plt.pause(1)
        plt.close()

    # Simulating the environment and visualizing
    def run_vacuum_world_with_visualization(start_loc, environment):
        loc = start_loc
        for step in range(1, 6):  # Run for 5 steps
            stat = environment[loc]
            act = agent_Simple_Reflex(loc, stat)

            # Visualization
            environment_draw(loc, environment, step, act)

            # Updating the environment
            if act == "Suck":
                environment[loc] = "Clean"
            elif act == "Move Right":
                loc = "B"
            elif act == "Move Left":
                loc = "A"

    # Environment setup
    state_environment = {"A": "Dirty", "B": "Dirty"}
    run_vacuum_world_with_visualization("A", state_environment.copy())

    Output:


    The agent cleaned the rooms based on the current percept alone, without memory or planning, but it acts suboptimally once both rooms are clean or when an action would require memory: it simply keeps moving back and forth.

    2. Model-based reflex agent

    • The model-based agent can work in a partially observable environment and track the situation.
    • A model-based agent has two important factors:
      • Model: knowledge about “how things happen in the world,” which is why it is called a model-based agent.
      • Internal State: a representation of the current state based on the percept history.
    • These agents use the model, their knowledge of the world, to choose their actions.
    • Updating the agent state requires information about:
      1. How the world evolves.
      2. How the agent’s actions affect the world.

    Here is an implementation of a model-based reflex agent.

    Code:

    import matplotlib.pyplot as plt
    import matplotlib.patches as patches

    # Function of the model-based reflex agent
    def model_based_reflex_agent(loc, percept_stat, model_internal):
        # Updating the internal model with the current percept
        model_internal[loc] = percept_stat

        # Decision rules: rooms still marked "Unknown" are treated as
        # possibly dirty; otherwise the agent would stop before ever
        # visiting them
        if percept_stat == "Dirty":
            return "Suck"
        elif model_internal["A"] != "Clean":
            return "Move Left"
        elif model_internal["B"] != "Clean":
            return "Move Right"
        else:
            return "NoOp"  # Do nothing if everything is clean

    # Visualization function
    def environment_draw(loc, state_env, step, act):
        fig, ax = plt.subplots(figsize=(6, 3))

        # Room A
        ax.add_patch(patches.Rectangle((0, 0), 3, 3, color='gray' if state_env["A"] == "Dirty" else 'lightgreen'))
        ax.text(1.5, 1.5, "A\n" + state_env["A"], ha='center', va='center', fontsize=12)

        # Room B
        ax.add_patch(patches.Rectangle((3, 0), 3, 3, color='gray' if state_env["B"] == "Dirty" else 'lightgreen'))
        ax.text(4.5, 1.5, "B\n" + state_env["B"], ha='center', va='center', fontsize=12)

        # Agent
        X_agent = 1.5 if loc == "A" else 4.5
        ax.plot(X_agent, 2.5, marker="o", markersize=20, color="blue")
        ax.text(X_agent, 2.9, "🤖", ha='center', va='center', fontsize=14)

        ax.set_title(f"Step {step}: {act}")
        ax.axis("off")
        plt.pause(1)
        plt.close()

    # Running the model-based reflex agent simulation
    def run_model_based_agent(start_loc, environment):
        loc = start_loc
        model_internal = {"A": "Unknown", "B": "Unknown"}

        for step in range(1, 7):  # Run for 6 steps
            stat_curr = environment[loc]
            act = model_based_reflex_agent(loc, stat_curr, model_internal)

            environment_draw(loc, environment, step, act)

            # Update the environment and the agent's location
            if act == "Suck":
                environment[loc] = "Clean"
            elif act == "Move Right":
                loc = "B"
            elif act == "Move Left":
                loc = "A"
            elif act == "NoOp":
                break  # Stop when everything is clean

    # Start the environment
    state_environment = {"A": "Dirty", "B": "Dirty"}
    run_model_based_agent("A", state_environment.copy())
    
    Output:

    The agent successfully cleaned both rooms using internal memory to track the environment state, allowing it to behave more intelligently than the simple reflex agent.

    3. Goal-based agents

    Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do. The agent also needs a goal, which describes desirable situations. Goal-based agents extend the capabilities of the model-based agent by adding this "goal" information.

    They choose actions that move them towards the goal. Such agents may have to consider a long sequence of possible actions before deciding whether the goal is achievable. This consideration of different scenarios is called searching and planning, and it is what makes an agent proactive.


    Here is an implementation of a goal-based agent.

    Code:

    import matplotlib.pyplot as plt
    import matplotlib.patches as patches

    # Planning logic for the goal-based agent
    def goal_based_agent(loc, world_state, goal):
        # Goal test: do nothing once the world matches the goal state
        if world_state == goal:
            return "NoOp"

        # If the current room is dirty, clean it
        if world_state[loc] == "Dirty":
            return "Suck"

        # Otherwise the other room still differs from the goal; head there
        return "Move Right" if loc == "A" else "Move Left"

    # Visualization function
    def draw_vacuum_world(loc, world_state, step, act):
        fig, ax = plt.subplots(figsize=(6, 3))

        # Room A
        ax.add_patch(patches.Rectangle((0, 0), 3, 3, color='gray' if world_state["A"] == "Dirty" else 'lightgreen'))
        ax.text(1.5, 1.5, "A\n" + world_state["A"], ha='center', va='center', fontsize=12)

        # Room B
        ax.add_patch(patches.Rectangle((3, 0), 3, 3, color='gray' if world_state["B"] == "Dirty" else 'lightgreen'))
        ax.text(4.5, 1.5, "B\n" + world_state["B"], ha='center', va='center', fontsize=12)

        # Agent
        X_agent = 1.5 if loc == "A" else 4.5
        ax.plot(X_agent, 2.5, marker="o", markersize=20, color="blue")
        ax.text(X_agent, 2.9, "🤖", ha='center', va='center', fontsize=14)

        ax.set_title(f"Step {step}: {act}")
        ax.axis("off")
        plt.pause(1)
        plt.close()

    # Simulate the goal-based agent
    def run_goal_based_agent(start_loc, environment):
        loc = start_loc
        goal = {"A": "Clean", "B": "Clean"}  # Define the goal state

        for step in range(1, 7):
            act = goal_based_agent(loc, environment, goal)

            draw_vacuum_world(loc, environment, step, act)

            # Update the environment
            if act == "Suck":
                environment[loc] = "Clean"
            elif act == "Move Right":
                loc = "B"
            elif act == "Move Left":
                loc = "A"
            elif act == "NoOp":
                break  # Stop if the goal is achieved

    # Initial state
    env_init = {"A": "Dirty", "B": "Dirty"}
    run_goal_based_agent("A", env_init.copy())
    
    Output:

    By organizing its activities according to the current and goal states and halting when the goal was reached, the agent was able to accomplish its objective of cleaning both rooms.

    4. Utility-based agents

    These agents are similar to goal-based agents, but they add an extra component: a utility measurement, which provides a measure of success in a given state. Utility-based agents act based not only on goals but also on the best way to achieve them.

    A utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action. The utility function maps each state to a real number that indicates how well an action achieves the agent's goals.


    Here is an implementation of a utility-based agent.

    Code:

    # Importing libraries
    import matplotlib.pyplot as plt
    import matplotlib.patches as patches

    # Utility of a (simulated) resulting state, given the action taken
    def utility_calcu(state_env, act):
        utility = 0
        if state_env["A"] == "Clean" and state_env["B"] == "Clean":
            utility += 100
        elif state_env["A"] == "Dirty" or state_env["B"] == "Dirty":
            utility += 10

        if act == "Suck":
            utility += 0
        elif act.startswith("Move"):
            utility -= 5  # moving has a small cost
        elif act == "NoOp":
            # Idling is cheap once everything is clean, but heavily
            # penalized while dirt remains, so the agent keeps working
            if state_env["A"] == "Dirty" or state_env["B"] == "Dirty":
                utility -= 20
            else:
                utility -= 1

        return utility

    # Decision-making on the basis of utility
    def utility_based_agent(loc, state_env):
        possible_acts = ["Suck", "Move Right", "Move Left", "NoOp"]
        best_act = None
        utility_best = float('-inf')

        for act in possible_acts:
            state_simulated = state_env.copy()
            loc_simulated = loc

            # Simulating the effect of the action
            if act == "Suck":
                if state_simulated[loc_simulated] == "Clean":
                    continue  # sucking in a clean room achieves nothing
                state_simulated[loc_simulated] = "Clean"
            elif act == "Move Right":
                loc_simulated = "B"
            elif act == "Move Left":
                loc_simulated = "A"

            utility = utility_calcu(state_simulated, act)
            if utility > utility_best:
                utility_best = utility
                best_act = act

        return best_act

    # Visualization function
    def environment_draw(loc, state_env, step, act):
        fig, ax = plt.subplots(figsize=(6, 3))

        # Drawing Room A
        ax.add_patch(patches.Rectangle((0, 0), 3, 3, color='gray' if state_env["A"] == "Dirty" else 'lightgreen'))
        ax.text(1.5, 1.5, "A\n" + state_env["A"], ha='center', va='center', fontsize=12)

        # Drawing Room B
        ax.add_patch(patches.Rectangle((3, 0), 3, 3, color='gray' if state_env["B"] == "Dirty" else 'lightgreen'))
        ax.text(4.5, 1.5, "B\n" + state_env["B"], ha='center', va='center', fontsize=12)

        # Drawing the agent
        X_agent = 1.5 if loc == "A" else 4.5
        ax.plot(X_agent, 2.5, marker="o", markersize=20, color="blue")
        ax.text(X_agent, 2.9, "🤖", ha='center', va='center', fontsize=14)

        ax.set_title(f"Step {step}: {act}")
        ax.axis("off")
        plt.pause(1)
        plt.close()

    # Running the utility-based simulation
    def run_utility_agent(start_loc, state_env):
        loc = start_loc
        for step in range(1, 7):
            act = utility_based_agent(loc, state_env)

            environment_draw(loc, state_env, step, act)

            # Performing the action
            if act == "Suck":
                state_env[loc] = "Clean"
            elif act == "Move Right":
                loc = "B"
            elif act == "Move Left":
                loc = "A"
            elif act == "NoOp":
                break  # stop once doing nothing is the best action

    # Starting the simulation
    env_init = {"A": "Dirty", "B": "Dirty"}
    run_utility_agent("A", env_init.copy())
    
    Output:

    By effectively balancing cleaning with minimal movement, the agent chose actions that maximized its utility and avoided pointless operations after utility was maximized.

    5. Learning Agents

    A learning agent in AI is a type of agent that can learn from its past experiences; it starts with basic knowledge and then adapts automatically through learning. A learning agent has four main conceptual components:

    1. Learning element: It is responsible for making improvements by learning from the environment.
    2. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
    3. Performance element: It is responsible for selecting external actions.
    4. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.

    Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.

    Here is an implementation of a learning agent.

    Code:

    import random
    import matplotlib.pyplot as plt
    import matplotlib.patches as patches

    # Agent that learns the value of each situation over time
    class SmartVacuum:
        def __init__(self):
            # Stores the learned value of each (room, state) pair
            self.memory = {
                ("A", "Dirty"): 10,
                ("B", "Dirty"): 10,
                ("A", "Clean"): 0,
                ("B", "Clean"): 0
            }
            self.rate = 0.5      # How much new info changes memory
            self.discount = 0.9  # Not used in this version

        def decide(self, room, condition):
            # Very basic policy: clean if dirty, else switch room
            if condition == "Dirty":
                return "Clean"
            return "Go Right" if room == "A" else "Go Left"

        def update_memory(self, room, condition, points):
            state = (room, condition)
            old_val = self.memory.get(state, 0)
            self.memory[state] = old_val + self.rate * (points - old_val)

    # Simple visual feedback
    def draw_rooms(agent_spot, room_state, count, move):
        fig, ax = plt.subplots(figsize=(6, 3))

        # Show room A
        ax.add_patch(patches.Rectangle((0, 0), 3, 3, color='gray' if room_state["A"] == "Dirty" else 'lightgreen'))
        ax.text(1.5, 1.5, f"A\n{room_state['A']}", ha='center', va='center', fontsize=12)

        # Show room B
        ax.add_patch(patches.Rectangle((3, 0), 3, 3, color='gray' if room_state["B"] == "Dirty" else 'lightgreen'))
        ax.text(4.5, 1.5, f"B\n{room_state['B']}", ha='center', va='center', fontsize=12)

        # Draw agent
        bot_x = 1.5 if agent_spot == "A" else 4.5
        ax.plot(bot_x, 2.5, marker="o", markersize=20, color="navy")
        ax.text(bot_x, 2.9, "🤖", ha='center', va='center', fontsize=14)

        ax.set_title(f"Turn {count}: {move}")
        ax.axis("off")
        plt.pause(0.8)
        plt.close()

    # Running the agent through steps
    def run_simulation():
        # Start with random dirt in the rooms
        rooms = {
            "A": random.choice(["Clean", "Dirty"]),
            "B": random.choice(["Clean", "Dirty"])
        }

        bot_place = "A"
        cleaner = SmartVacuum()

        for i in range(1, 11):
            condition = rooms[bot_place]
            next_move = cleaner.decide(bot_place, condition)

            draw_rooms(bot_place, rooms, i, next_move)

            # Remember the state the action was taken in, so the reward
            # is credited to the right (room, condition) pair
            state_before = (bot_place, condition)

            # Action results in a reward or penalty
            if next_move == "Clean":
                reward = 10
                rooms[bot_place] = "Clean"
            elif "Go" in next_move:
                reward = -1
                bot_place = "B" if bot_place == "A" else "A"
            else:
                reward = 0

            # Agent updates its value estimate for the pre-action state
            cleaner.update_memory(state_before[0], state_before[1], reward)

    # Start the program
    run_simulation()
    
    Output:

    Through experience-based behavior modification and reward-based learning, the agent gradually enhanced its performance, leading to more intelligent and effective cleaning.

  • What are AI Agents? Definition and Types

    Artificial Intelligence can be defined as the study of rational agents. A rational agent may be a person, a firm, a machine, or a piece of software that makes decisions. It acts so as to achieve the best outcome given its past and present percepts (the agent's perceptual inputs at a given instant). An AI system is made up of an agent and its environment. Agents act in their environment, and the environment may contain other agents.

    An agent is anything that can be viewed as:


    • Perceiving its environment through sensors, and
    • Acting upon that environment through actuators

    Note: Every agent can perceive its own actions (but not always their effects).

    What is the composition of agents in Artificial Intelligence

    To understand the structure of intelligent agents, we must be familiar with architecture and agent programs. The architecture is the machinery on which the agent executes: a device with sensors and actuators, for example a robotic car, a camera, or a PC. An agent program is an implementation of an agent function, and an agent function is a map from the percept sequence (the history of everything the agent has perceived to date) to an action.

    agent = architecture + agent program
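
    To make the formula concrete, here is a minimal, illustrative sketch of a table-driven agent program for a two-room vacuum world; the TableDrivenAgent class and its lookup table are assumptions made for this example, not part of the original article:

    # Sketch of an agent program: the agent function maps the percept
    # sequence seen so far to an action via a lookup table.
    class TableDrivenAgent:
        def __init__(self, table):
            self.table = table    # maps percept sequences to actions
            self.percepts = []    # the percept history

        def program(self, percept):
            self.percepts.append(percept)
            # The agent function: percept sequence -> action
            return self.table.get(tuple(self.percepts), "NoOp")

    # Illustrative table entries for the first couple of steps
    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Move Right",
        (("A", "Dirty"), ("A", "Clean")): "Move Right",
    }
    agent = TableDrivenAgent(table)
    print(agent.program(("A", "Dirty")))  # -> Suck

    The table grows exponentially with the length of the percept history, which is why practical agent programs compute actions rather than look them up.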

    Agent examples:

    A software agent receives keystrokes, file contents, and incoming network packets as sensory input, and acts on its environment by displaying output on the screen, writing files, and sending network packets.

    The human agent has eyes, ears, and other organs that act as sensors, and hands, feet, mouth, and other body parts that act as actuators.

    A robotic agent consists of cameras and infrared range finders that act as sensors and various motors that act as actuators.


    Types of agents

    Agents can be divided into five classes based on their degree of perceived intelligence and capability:

    • Simple reflex agent
    • Model-based reflex agent
    • Goal-based agent
    • Utility-based agent
    • Learning agent

    Simple reflex agent:

    Simple reflex agents ignore the rest of the percept history and act only on the current percept. (The percept history is everything the agent has perceived to date.) The agent function is based on condition-action rules. A condition-action rule maps a condition (a state) to an action: if the condition is true, the action is taken; otherwise it is not. This agent function succeeds only when the environment is fully observable. For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable; if the agent can randomize its actions, it may be able to escape them.
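
    As an illustrative sketch (distinct from the fuller implementation earlier in this article), a condition-action rule set for the two-room vacuum world can be written as a plain lookup table keyed by the current percept alone:

    # Condition-action rules: (location, status) -> action.
    # Only the current percept is consulted; no history is kept.
    rules = {
        ("A", "Dirty"): "Suck",
        ("B", "Dirty"): "Suck",
        ("A", "Clean"): "Move Right",
        ("B", "Clean"): "Move Left",
    }

    def simple_reflex_program(percept):
        return rules[percept]

    print(simple_reflex_program(("A", "Clean")))  # -> Move Right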

    The problems with simple reflex agents are:

    • Very limited intelligence.
    • There is no knowledge of the non-perceptual parts of the state.
    • The condition-action rule table is usually too large to generate and store.
    • If a change occurs in the environment, the collection of rules needs to be updated.

    Model-based reflex agents:

    It works by finding a rule whose condition matches the current situation. A model-based agent can handle a partially observable environment by using a model of the world. The agent has to keep track of an internal state, adjusted by each percept and dependent on the percept history. The current state is stored inside the agent as a structure describing the part of the world that cannot currently be seen.

    Updating the state requires information about (see the sketch after this list):

    • How the world evolves independently of the agent, and
    • How the agent’s actions affect the world.
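
    A minimal sketch of that update step for the two-room vacuum world (the update_state function and the belief representation are illustrative assumptions, not the article's earlier code):

    # The internal state records the agent's believed location and the
    # believed status of each room.
    def update_state(state, last_action, percept):
        state = dict(state)
        # How the agent's action affects the world
        if last_action == "Move Right":
            state["loc"] = "B"
        elif last_action == "Move Left":
            state["loc"] = "A"
        elif last_action == "Suck":
            state[state["loc"]] = "Clean"
        # The prediction is then corrected by the current percept
        loc, status = percept
        state["loc"] = loc
        state[loc] = status
        return state

    belief = {"loc": "A", "A": "Unknown", "B": "Unknown"}
    belief = update_state(belief, None, ("A", "Dirty"))
    print(belief)  # {'loc': 'A', 'A': 'Dirty', 'B': 'Unknown'}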

    Goal-based agents

    These agents make decisions based on how far they currently are from their goal (a description of desirable situations). Every action is aimed at reducing the distance to the goal. This gives the agent a way to choose among multiple possibilities, picking the one that leads to the goal state. The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning, and the behavior of a goal-based agent can easily be changed.
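
    To illustrate the searching-and-planning idea, here is a minimal breadth-first-search sketch that finds a sequence of actions reaching the goal in the two-room vacuum world; the state encoding and the helper names (apply_action, plan) are assumptions made for this example:

    from collections import deque

    ACTIONS = ["Suck", "Move Right", "Move Left"]

    # A state is (agent location, status of room A, status of room B)
    def apply_action(state, act):
        loc, a, b = state
        if act == "Suck":
            return (loc, "Clean", b) if loc == "A" else (loc, a, "Clean")
        return ("B", a, b) if act == "Move Right" else ("A", a, b)

    # Breadth-first search for the shortest action sequence to the goal
    def plan(start, goal_test):
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, acts = frontier.popleft()
            if goal_test(state):
                return acts
            for act in ACTIONS:
                nxt = apply_action(state, act)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, acts + [act]))

    # Agent in A, both rooms dirty; goal: both rooms clean
    print(plan(("A", "Dirty", "Dirty"), lambda s: s[1] == s[2] == "Clean"))
    # -> ['Suck', 'Move Right', 'Suck']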


    Utility-based agents

    Agents designed with an explicit measure of preference over states as a building block are called utility-based agents. When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough: we may look for a quicker, safer, or cheaper trip to reach a destination, so the agent’s "happiness" should be taken into consideration.

    Utility describes how "happy" the agent is. Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number that describes the associated degree of happiness.
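
    As a small illustration of choosing the action that maximizes expected utility, the sketch below scores each action by its probability-weighted utility; the actions, probabilities, and utility values are invented for the example:

    # Possible outcomes of each action as (probability, utility) pairs
    outcomes = {
        "fast road": [(0.7, 80), (0.3, -40)],
        "safe road": [(1.0, 50)],
    }

    # Expected utility: sum of utility weighted by outcome probability
    def expected_utility(action):
        return sum(p * u for p, u in outcomes[action])

    best = max(outcomes, key=expected_utility)
    print(best, expected_utility(best))  # -> safe road 50.0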


    Learning Agent:

    A learning agent in AI is a type of agent that can learn from its past experiences, i.e., it has learning capabilities. It starts to act with basic knowledge and then is able to act and adapt automatically through learning. A learning agent has four main conceptual components:

    1. Learning element: It is responsible for making improvements by learning from the environment.
    2. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
    3. Performance element: It is responsible for selecting external actions.
    4. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.

    The Nature of Environments

    Some programs operate in completely artificial environments that are limited to keyboard input, databases, computer file systems, and character output on the screen.

    In contrast, some software agents (software robots or softbots) exist in rich, unlimited softbot domains, where the simulated environment is very detailed and complex and the software agent must choose among a long range of actions in real time.

    A softbot designed to scan the customer’s online preferences and show interesting items to the customer works in a real as well as an artificial environment.

    The most famous artificial environment is the Turing test environment, in which real and artificial agents are tested on an equal footing. This is a very challenging environment, as it is extremely difficult for a software agent to perform as well as a human.

    Turing Test

    The success of a system’s intelligent behavior can be measured with the Turing test.

    Two people and the machine to be evaluated participate in the test. One of the two people plays the role of the examiner, and each participant sits in a different room. The examiner does not know which respondent is a machine and which is a human. He inquires by typing questions and sending them to both respondents, from whom he receives typed responses.

    The machine's aim in this test is to fool the examiner: if the examiner fails to distinguish the machine's responses from the human's, the machine is said to be intelligent.

    Properties of Environment

    An environment can be characterized along several dimensions:

    • Discrete / Continuous – If there is a limited number of distinct, clearly defined states of the environment, the environment is discrete (for example, chess); otherwise it is continuous (for example, driving).
    • Observable / Partially observable – If it is possible to determine the complete state of the environment from the percepts at each point in time, the environment is observable; otherwise it is only partially observable.
    • Static / Dynamic – If the environment does not change while the agent is acting, it is static; otherwise it is dynamic.
    • Single agent / Multiple agents – The environment may contain other agents, which may be of the same kind as the agent or of a different kind.
    • Accessible / Inaccessible – If the agent’s sensory apparatus can access the complete state of the environment, the environment is accessible to that agent; otherwise it is inaccessible.
    • Deterministic / Non-deterministic – If the next state of the environment is completely determined by the current state and the actions of the agent, the environment is deterministic; otherwise it is non-deterministic.
    • Episodic / Non-episodic – In an episodic environment, each episode consists of the agent perceiving and then acting, and the quality of its action depends only on that episode; subsequent episodes do not depend on the actions taken in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead.