Author: saqibkhan

  • History of Artificial Intelligence

    The term and the technology are not new, and the idea of artificial intelligence is far older than you might imagine: Ancient Greek and Egyptian myths tell of mechanical men. Here are some of the milestones in AI history that shaped the path of AI development from its beginnings to the present day:


    Maturation of Artificial Intelligence (1943-1952)

    There was plenty of progress in artificial intelligence (AI) between 1943 and 1952, as AI moved from theory toward reality through early experiments and models. Some of the key events of this period were:

    • The Year 1943: Warren McCulloch and Walter Pitts produced the first work now regarded as AI, presenting a mathematical model of an artificial neuron.
    • The Year 1949: Donald Hebb proposed an updating rule for modifying the connection strength between neurons, now known as Hebbian learning.
    • The Year 1950: Alan Turing published "Computing Machinery and Intelligence," in which he proposed a test, now known as the Turing test, for judging whether a machine can display intelligent behaviour equivalent to that of a human.
    • The Year 1951: Marvin Minsky and Dean Edmonds built SNARC, the first artificial neural network (ANN) machine, using 3,000 vacuum tubes for a network of 40 neurons.
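    The McCulloch-Pitts neuron can be sketched in a few lines of modern code: the unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold. Below is a minimal illustrative sketch; the weights and threshold are arbitrary choices, not values from the original paper.

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2, the neuron behaves like an AND gate:
print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=2))  # 0
```

    Lowering the threshold to 1 turns the same unit into an OR gate, which is how such neurons were combined into simple logic circuits.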

    The birth of Artificial Intelligence (1952-1956)

    Between 1952 and 1956, AI came to life as a research field. During this time, pioneers and forward thinkers got together and laid the groundwork for what would eventually grow into a vast technological domain. Notable occurrences of this era include:

    • Year 1952: Arthur Samuel created the Samuel Checkers-Playing Program, the world’s first self-learning game-playing program.
    • Year 1955: Allen Newell and Herbert A. Simon created the Logic Theorist, one of the first artificial intelligence programs. It proved 38 of 52 mathematical theorems and found new, more elegant proofs for some of them.
    • Year 1956: The term Artificial Intelligence was first used at the Dartmouth Conference (USA) by American computer scientist John McCarthy, and AI was established as an academic field for the first time.

    High-level computer languages such as FORTRAN, LISP, and COBOL were invented in this period, and there was great enthusiasm for AI.

    The Golden Years- Early Enthusiasm (1956-1974)

    The period from 1956 to 1974 is generally regarded as the Golden Age of artificial intelligence (AI). It was an upswing for AI researchers and innovators, who achieved tremendous advances in the field. A few notable events of this era:

    • The Year 1958: Frank Rosenblatt introduced the perceptron, one of the first artificial neural networks able to learn from data; modern neural networks build on this invention. In the same year, John McCarthy developed the Lisp programming language, which quickly found favour in the AI community and became very popular among developers.
    • The Year 1959: Arthur Samuel published a seminal paper in which he coined the term machine learning and proposed that computers could be programmed to outperform their makers. Oliver Selfridge also made an important contribution with his article ‘Pandemonium: A Paradigm for Learning’, which described a model that can improve itself and discover patterns in events.
    • The Year 1964: Daniel Bobrow, then a doctoral candidate at MIT, created STUDENT, one of the early natural language processing (NLP) programs, built to solve algebra word problems.
    • The Year 1965: Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi devised Dendral, the first expert system. Its purpose was to help organic chemists identify unknown organic compounds.
    • The Year 1966: Researchers aimed to develop algorithms that could solve mathematical problems. Joseph Weizenbaum created ELIZA, the first chatbot, in 1966. Additionally, the Stanford Research Institute developed Shakey, the first mobile intelligent robot to combine AI, computer vision, navigation, and NLP. It can be regarded as a precursor to today’s self-driving cars and drones.
    • The Year 1968: Terry Winograd developed SHRDLU, a pioneering natural-language understanding program that could understand a world of blocks and follow a user’s instructions to manipulate and reason about objects in that world.
    • The Year 1969: Arthur Bryson and Yu-Chi Ho described backpropagation, a learning method that would later make it possible to train multilayer artificial neural networks. It was an important advance over the perceptron and a foundation for deep learning. Marvin Minsky and Seymour Papert also published ‘Perceptrons’, which analysed basic neural networks and their limitations. The book effectively halted neural network research for years and lent credibility once again to the symbolic AI point of view.
    • The Year 1972: WABOT-1, the first intelligent humanoid robot, was built in Japan.
    • The Year 1973: The British government drastically cut its backing of AI research following the publication of James Lighthill’s report Artificial Intelligence: A General Survey.
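    Rosenblatt’s perceptron learns by nudging its weights whenever it misclassifies an example. The following is a minimal sketch of that learning rule, trained here on the AND function; the learning rate and number of passes are illustrative choices.

```python
# A minimal sketch of the perceptron learning rule, trained on the AND function.
def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # a few passes over the data are enough for this toy problem
    for x, target in samples:
        error = target - predict(weights, bias, x)  # 0 when correct, ±1 otherwise
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([predict(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]
```

    Because AND is linearly separable, the perceptron convergence theorem guarantees this loop eventually stops making mistakes; for XOR, famously, it never would.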

    The first AI winter (1974-1980)

    A tough period for artificial intelligence (AI), known as the first AI winter, lasted from 1974 to 1980. During this time there were significant cuts in research funding and a general feeling of disappointment with AI.

    • AI winter is a term in computer science for a period of severe shortage of government funding for AI research.
    • During AI winters, public interest in and publicity around artificial intelligence dropped.

    A boom of AI (1980-1987)

    After the first AI winter, AI entered a renaissance between 1980 and 1987, a period of renewed vitality. These were some noteworthy events of this time:

    • The Year 1980: The first national conference of the American Association of Artificial Intelligence was held at Stanford University.
    • The Year 1980: AI came back after the winter with expert systems, programs that emulate the decision-making ability of a human expert. AI also enjoyed a commercial resurgence as Symbolics Lisp machines came into commercial use, though the Lisp machine market crashed in subsequent years.
    • The Year 1981: Danny Hillis designed parallel computers for AI and other computational needs, based on an architecture similar to that of modern GPUs.
    • The Year 1984: Marvin Minsky and Roger Schank coined the phrase “AI winter” at a meeting of the Association for the Advancement of Artificial Intelligence. They cautioned the business world not to build expectations for AI too high, warning that this would end in disillusionment and the collapse of the industry, which is exactly what happened about three years later.
    • The Year 1985: Judea Pearl introduced Bayesian network causal analysis, providing statistical techniques for representing uncertainty in computer systems.
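    Pearl’s Bayesian networks represent uncertain knowledge as conditional probabilities and update beliefs with Bayes’ rule. A toy two-node network (Rain -> WetGrass) with made-up probabilities illustrates the kind of inference involved:

```python
# A toy two-node Bayesian network (Rain -> WetGrass) in the spirit of Pearl's work.
# All probabilities below are invented for illustration.
p_rain = 0.2
p_wet_given_rain = 0.9
p_wet_given_no_rain = 0.1

# P(WetGrass): marginalise over the two states of Rain
p_wet = p_rain * p_wet_given_rain + (1 - p_rain) * p_wet_given_no_rain

# Bayes' rule: P(Rain | WetGrass) = P(Rain) * P(WetGrass | Rain) / P(WetGrass)
p_rain_given_wet = p_rain * p_wet_given_rain / p_wet
print(round(p_rain_given_wet, 3))  # 0.692
```

    Observing wet grass raises the belief in rain from 20% to about 69%; chaining many such conditional tables is what full Bayesian networks do.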

    The second AI winter (1987-1993)

    • The second AI winter lasted from 1987 through 1993.
    • The high cost and inefficiency of the results again led investors and governments to stop funding AI research. Expert systems such as XCON proved very expensive to maintain.

    The emergence of intelligent agents (1993-2011)

    From 1993 to 2011, artificial intelligence (AI) advanced considerably, especially with the development of intelligent agents. During this era, AI professionals moved away from trying to imitate human intelligence and toward building practical, ingenious software for specific tasks. Noteworthy occurrences of this period include:

    • The Year 1997: IBM’s Deep Blue won a historic victory against world chess champion Garry Kasparov, the first defeat of a reigning world chess champion by a computer. In addition, Sepp Hochreiter and Jürgen Schmidhuber introduced the Long Short-Term Memory (LSTM) recurrent neural network, allowing entire sequences of data, such as speech or video, to be processed.
    • The Year 2002: AI entered the home for the first time in the form of Roomba, a robotic vacuum cleaner.
    • The Year 2006: By 2006, companies such as Facebook, Twitter, and Netflix had incorporated AI into their businesses.
    • The Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng published the paper ‘Large-scale Deep Unsupervised Learning using Graphics Processors’, introducing the idea of using GPUs to train large neural networks.
    • The Year 2011: Dan Claudiu Ciresan, Ueli Meier, Jonathan Masci, and Jürgen Schmidhuber achieved the first ‘superhuman’ performance with a convolutional neural network by winning the German Traffic Sign Recognition competition. Additionally, Apple released Siri, a voice-activated personal assistant that can interpret spoken commands and generate responses.

    Deep Learning, Big Data, and Artificial General Intelligence (2011-present)

    Significant progress has been made in artificial intelligence (AI) from 2011 to the present, the result of combining deep learning with big data and the ongoing pursuit of artificial general intelligence (AGI). Some notable occurrences of this timeframe are as follows:

    • The Year 2011: IBM’s Watson beat human champions at Jeopardy!, a game show in which players must answer questions and solve riddles. Watson answered tricky questions quickly and demonstrated that it could understand natural language.
    • The Year 2012: Google launched ‘Google Now’, an Android feature that predicts and surfaces information for the user. Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky also introduced a deep convolutional neural network architecture that won the ImageNet challenge, and their work brought deep learning back to the forefront of research.
    • The Year 2013: China’s Tianhe-2 supercomputer doubled the speed of the previous world’s fastest supercomputer, reaching 33.86 petaflops, and remained the world’s fastest system for several years running. DeepMind also introduced deep reinforcement learning, training a CNN over many trials using rewards; it learned to beat humans at some games. In addition, Tomas Mikolov and his team at Google introduced word2vec, a tool that learns word representations from text automatically.
    • The Year 2014: The chatbot ‘Eugene Goostman’ was claimed to have passed the famous Turing test. Diederik Kingma and Max Welling introduced variational autoencoders (VAEs), which can generate images, video, and text, while Ian Goodfellow and his group introduced generative adversarial networks (GANs), now used to create images and deepfakes. Furthermore, Facebook’s deep-learning-based facial recognition system DeepFace achieved near human-level accuracy in recognising human faces in digital images.
    • The Year 2016: Echoing Kasparov’s match against Deep Blue nearly 20 years earlier, DeepMind’s AlphaGo defeated Go world champion Lee Sedol in Seoul, South Korea. Uber also piloted a self-driving car program for a limited group of users in Pittsburgh, one of its test grounds.
    • The Year 2018: IBM’s ‘Project Debater’ debated complex topics with two expert debaters and performed remarkably well. Google’s AI program Duplex booked a hairdresser appointment over the phone, and the person on the other end of the line did not realise she was talking to a machine.
    • The Year 2021: OpenAI introduced DALL-E, a multimodal AI system that can create images from textual prompts.
    • The Year 2022: OpenAI launched ChatGPT, presenting its GPT-3.5 LLM in a chat format.

    Today, AI has reached great heights, and only recently. Deep learning, big data, and data science are booming. Companies such as Google, Facebook, IBM, and Amazon are using AI to develop remarkable devices and services. The future of Artificial Intelligence is inspiring, promising ever more intelligent systems.


    Milestones in AI Development

    The history of Artificial Intelligence (AI) is rich with achievements that helped the technology evolve into one of the most transformative technologies we work with today.

    Notable Achievements

    AlphaGo

    In 2016, Google’s subsidiary DeepMind, which developed AlphaGo, pulled off a major achievement for artificial intelligence when AlphaGo defeated world champion Go player Lee Sedol.

    • Go is a hard problem for computational systems because the number of possible moves is enormous.
    • AlphaGo combined deep learning and reinforcement learning to discover strategies never before observed by humans.
    • AlphaGo’s success opened the gateway to applying related techniques in other fields, such as healthcare and logistics.

    GPT Models (Generative Pre-trained Transformers)

    NLP was revolutionised when OpenAI’s GPT models enabled machines to understand and create human-like text.

    • GPT-2 (2019), GPT-3 (2020), and subsequent iterations set benchmarks for language generation tasks.
    • Because GPT models can be pre-trained on large datasets and then fine-tuned for specific tasks, they can be used in applications such as chatbots, content creation, and coding assistance.
    • AI is now used across customer service, education, and entertainment, which also raises ethical questions about AI-generated content.

    DALL-E

    DALL-E is a generative AI system for image synthesis, also developed by OpenAI.

    • It can generate high-quality images from text descriptions by combining image generation with language understanding.
    • DALL-E’s application areas range from advertising and graphic design to education, showing the growing convergence of AI and the arts.
    • More broadly, DALL-E illustrates the promise of multimodal AI systems, which combine several kinds of data to perform sophisticated tasks.

    Key Organisations and Conferences in AI Progress

    • DeepMind: Pioneers in reinforcement learning and the creators of AlphaGo.
    • OpenAI: Focused on developing safe and general AI technologies such as GPT and DALL-E.
    • Stanford AI Lab: A revolutionary research centre that has contributed several foundational AI technologies.

    Prominent Conferences

    • NeurIPS (Conference on Neural Information Processing Systems): A leading conference for innovation in advanced machine learning and deep learning.
    • AAAI (Association for the Advancement of Artificial Intelligence): A venue for presenting the latest AI advances and emerging areas, as well as discussing the ethical implications of AI.
    • IJCAI (International Joint Conference on Artificial Intelligence): Promotes cooperative work across multiple AI domains and among international research communities.

    International Collaboration in AI Research

    Global cooperation among researchers, organisations, and governments is important for the fast development of AI.

    Collaborative Research Initiatives

    • Human Brain Project (EU): Aims to push the frontier of AI with models of cognition and learning inspired by neuroscience.
    • Partnership on AI (Global): A collective of industry leaders and academics working to ensure AI is developed ethically and responsibly.
    • AI for Good (UN): Seeks to employ AI in the fight against issues such as climate change, poverty, and healthcare challenges.
  • Features of Artificial Intelligence

    Artificial intelligence is no longer just a renowned concept of science fiction but a transformative power that is changing the face of industries, improving everyday life, and pushing the boundaries of what technology can do. Rooted in complex algorithms and computational power, AI has grown into a multidimensional capability, performing tasks that traditionally seemed to require human intelligence.

    From the automation of routine tasks to deep data analytics, AI is redesigning the way we connect with technology and the way businesses function. Its ability to learn over time, adapt itself, and continuously improve makes it central to modern technology. As AI percolates through every sector, understanding its core features is important for realising its full potential and adapting to the changes it brings to life and workplaces.


    Eliminate Dull and Boring Work

    Artificial Intelligence systems can easily automate repetitive and monotonous tasks, removing the need for human involvement in such work. This not only enhances productivity and accuracy but also frees humans to focus on more creative and complex problem-solving. Here is an in-depth look at how AI achieves this:

    Automation of Repetitive Tasks

    AI systems are good at handling repetitive, routine tasks that normally take a long time and are prone to human error. These tasks range from simple data entry to complex processes across different industries.

    Entry and Management of Data

    Data entry is the prime example of a mundane task that AI can automate. AI-driven systems can enter data efficiently with minimal errors. In contrast to humans, who become fatigued and error-prone, AI can work around the clock with a high degree of accuracy, a prerequisite for cases in which the integrity of data must be guaranteed.

    For instance, AI-powered OCR technology scans and digitizes paper documents, turning them into editable and searchable data formats. Automating this procedure not only saves time but also reduces data entry errors.

    Improving Productivity and Accuracy

    AI greatly enhances productivity by automating repetitive tasks. What used to take hours now takes minutes, freeing employees’ time for tasks that genuinely require human intelligence and creativity. This shift from mundane to strategic work tends to raise job satisfaction and creates an environment in which innovation can take place in the workplace.

    Robotic Process Automation (RPA)

    Robotic Process Automation is a part of Artificial Intelligence that deals with process automation in organizations. RPA utilizes artificial intelligence and machine learning to handle high-volume, repetitive operations that would otherwise require human intervention.

    Code Example

    Here is an example of how data entry can be automated: a simple Python script using the openpyxl library to read and write Excel files:

    from openpyxl import load_workbook

    # Open the existing workbook and select the active sheet
    workbook = load_workbook('sales_data.xlsx')
    sheet = workbook.active

    # Example rows: a header followed by sales records
    data = [
        ('Date', 'Product', 'Quantity', 'Price'),
        ('2025-05-11', 'Laptop', 7, 1200),
        ('2025-05-12', 'Mouse', 18, 25),
        ('2025-05-13', 'Keyboard', 12, 45),
        ('2025-05-14', 'Monitor', 9, 200)
    ]

    # Append each row to the sheet and save under a new name
    for row in data:
        sheet.append(row)

    workbook.save('updated_sales_data.xlsx')
    print("Sales data entry automated successfully!")

    Output:

    Sales data entry automated successfully!

    This simple script automates the act of feeding data into an Excel file that would otherwise have to be entered manually. AI-based systems can extend this approach to process large amounts of data, check for errors, and more.

    Data Ingestion

    Data ingestion is among the most striking capabilities of AI systems, allowing them to process vast amounts of data with speed and accuracy. Organizations across many industries are surrounded by vast amounts of information originating from various sources.

    The ability of AI to quickly digest, analyze, and interpret this data provides the insight required for data-driven decisions. What follows is a closer look at the role and applications of data ingestion in AI:

    Speed and Efficiency in Data Processing

    AI systems are designed to process vast data sets that would be overwhelming and very time-consuming for human analysts. Data ingestion involves collecting large data volumes from several sources, transforming them into a usable format, and loading them into a database or data warehouse for analysis.

    Data Collection

    AI might draw from various sources, including databases, cloud storage, APIs, and real-time streams. In the financial sector, for example, data could come from stock market feeds, financial reports, and social media trends. In healthcare, data can originate from EHRs, medical devices, and clinical trials.

    Data Transformation

    The collected data then has to be cleaned and transformed into a form appropriate for analysis. Checking for duplicates, handling missing values, and standardizing formats are all quality control measures that AI algorithms can help automate to ensure the accuracy and consistency of the data. For instance, in marketing, data from several campaigns and channels may be standardized to give a unified view of every customer interaction.

    Data Loading

    The last step of data ingestion is loading the data into a database or data warehouse. AI can make this step efficient so that storage and retrieval work effectively. For example, e-commerce companies may load data about customer transactions and browsing behavior into a central repository for analysis, extracting insights into buyer preferences and purchase patterns.
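    The collect-transform-load sequence described above can be sketched with pandas. The column names and values below are invented for illustration:

```python
import pandas as pd

# Collect: in practice this data would come from databases, APIs, or files.
raw = pd.DataFrame({
    "customer": ["Alice", "Bob", "Bob", "Bob"],
    "amount": [120.0, None, 85.5, 85.5],  # one missing value, one duplicate row
})

# Transform: drop exact duplicate rows, then fill the missing amount.
clean = raw.drop_duplicates().fillna({"amount": 0.0})

# Load: write the cleaned data to a destination (here, a CSV acting as the warehouse).
clean.to_csv("clean_transactions.csv", index=False)
print(len(clean))  # 3 rows remain after deduplication
```

    Real ingestion pipelines add validation, schema checks, and incremental loading on top of these same three steps.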

    Analysis and Interpretation

    Once data is ingested, AI systems can analyze and interpret it to extract meaningful insights. This is most needed in industries where timely and accurate analysis of data drives decision-making.

    Code Example

    For example, here is how to read and filter data from a CSV file using the pandas library:

    import pandas as pd

    # Load the weather data and preview the first rows
    data = pd.read_csv('weather_data.csv')
    print("Dataset preview:")
    print(data.head())

    # Filter the rows recorded under rainy conditions
    filtered_data = data[data['condition'] == 'Rainy']
    print("\nFiltered data for 'Rainy' condition:")
    print(filtered_data)

    Sample Input CSV (weather_data.csv)

    date,temperature,humidity,condition
    2025-03-11,23°C,85%,Sunny
    2025-03-12,19°C,91%,Rainy
    2025-03-13,23°C,89%,Cloudy
    2025-03-14,18°C,94%,Rainy
    2025-03-15,26°C,72%,Sunny

    Output:

    Dataset preview:
    date temperature humidity condition
    0 2025-03-11 23°C 85% Sunny
    1 2025-03-12 19°C 91% Rainy
    2 2025-03-13 23°C 89% Cloudy
    3 2025-03-14 18°C 94% Rainy
    4 2025-03-15 26°C 72% Sunny
    Filtered data for 'Rainy' condition:
    date temperature humidity condition
    1 2025-03-12 19°C 91% Rainy
    3 2025-03-14 18°C 94% Rainy
    

    In this example, the program reads data from a CSV file and filters it based on a condition. This is just one simple illustration of how AI systems can take in and analyse data to support decisions.

    Imitate Human Cognition

    One of the most astounding features of AI is the capability of machines to imitate the activity of the human brain. Advanced techniques such as machine learning with neural networks enable AI systems to learn from data, identify patterns much as the human brain does, and make decisions based on past experience.

    This cognitive capacity makes AI capable of managing complex tasks with very high accuracy, changing whole industries and affecting daily life. Let us look a little deeper at how AI copies human cognition and at the impact of this ability.

    Learning from Data

    Machine learning lies at the core of AI’s cognitive capabilities: systems learn from data without explicit programming. AI models can process huge volumes of data, recognise patterns, and derive relationships that enable them to make predictions or decisions. For example, in language processing, AI systems analyze mounds of text to learn the nuances of human language, enabling them to perform tasks such as translation and summarization with impressive accuracy.

    Pattern Recognition and Decision Making

    Pattern recognition drives AI’s cognitive functions. For example, AI systems in image analysis can locate entities, faces, scenes, and other objects in images by learning from images that have been annotated. Likewise, AI in speech recognition recognizes words and phrases from audio data.

    This pattern recognition allows AI to make educated decisions. For instance, self-driving cars are equipped with AI that recognizes traffic signals, pedestrians, and other vehicles to make driving decisions in real-time for safe and efficient operation.

    Natural Language Processing (NLP)

    One of the directions in AI research is Natural Language Processing, an AI domain dealing with the interaction between computers and human language. Understanding and generating human language empowers AI systems to converse, answer questions, and make recommendations.

    Virtual assistants like Siri, Alexa, and Google Assistant use NLP to understand voice commands and react accordingly. They can set reminders, play music, report the weather, and even control smart home devices, exhibiting some of AI’s cognitive abilities.

    Facial Recognition and Chatbots

    Among AI’s many transformative features, facial recognition and chatbots stand out for their fast diffusion into everyday life, massive usage, and potential impact across business verticals. These two technologies, built atop cutting-edge AI algorithms, automate tasks once done manually, enhancing security, customer experience, and user experience overall.

    Facial recognition relies on sophisticated AI technology designed to identify and verify individuals by their unique facial features. It involves four stages: detection, alignment, feature extraction, and finally, matching.

    Detection

    The system first detects the face in an image or video frame. This stage usually involves machine learning models trained to recognize facial patterns.

    Alignment

    Once a face is detected, the system aligns it to a standard format so that the facial features are in consistent positions, providing a basis for further feature analysis. This step is critical for proper recognition, accounting for head pose as well as variations in lighting.

    Feature Extraction

    The system then extracts distinctive features from the face, such as the distance between the eyes and the shapes of the nose and lips. These features are converted into a mathematical representation called a faceprint.

    Matching

    The extracted faceprint is matched against a database of known faceprints. This is accomplished using advanced algorithms, typically deep learning approaches built on convolutional neural networks, which deliver highly accurate matches.

    Code Example for Facial Recognition

    Here is a simple example of facial recognition using Python’s face_recognition library:

    import face_recognition

    # Load an image and locate every face that appears in it
    group_image = face_recognition.load_image_file("group_photo.jpg")
    face_locations = face_recognition.face_locations(group_image)
    print(f"Found {len(face_locations)} face(s) in this image.")

    Example Output:

    For a group_photo.jpg in which three faces are visible, the output would be:

    Found 3 face(s) in this image.

    As this example demonstrates, the library works with images containing more than one face. You can replace ‘group_photo.jpg’ with the path to any image file to try it with other pictures.
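    Detection is only the first stage. The matching stage described earlier ultimately compares numeric vectors: two faceprints whose distance falls below a tolerance are treated as the same person. Here is a minimal sketch with made-up low-dimensional vectors; real faceprints are typically 128-dimensional embeddings produced by a CNN.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two faceprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Made-up faceprints; real systems use high-dimensional CNN embeddings.
known_faceprint = [0.10, 0.52, 0.33]
candidate = [0.12, 0.50, 0.31]
THRESHOLD = 0.1  # illustrative tolerance for declaring a match

is_match = euclidean(known_faceprint, candidate) < THRESHOLD
print(is_match)  # True
```

    Matching against a whole database is then just finding the stored faceprint with the smallest distance and checking it against the tolerance.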

    Chatbots

    Chatbots are AI-fueled virtual assistants built to carry on conversations with humans. They decipher and respond to user queries and interact conversationally, using natural language processing and machine learning. There are mainly two types of chatbots: rule-based and AI-based.

    Rule-Based Chatbots

    These work with predefined rules and scripts: they handle simple inquiries and give specific responses to particular keywords. A rule-based chatbot is therefore efficient for simple operations that fit its predefined rules.
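    The behaviour of a rule-based chatbot can be sketched as a keyword-to-response lookup. The rules and replies below are invented for illustration:

```python
# A minimal rule-based chatbot: predefined keyword rules map to canned responses.
RULES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "hello": "Hi! How can I help you today?",
}

def reply(message):
    """Return the response for the first rule whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("Hello there"))           # Hi! How can I help you today?
print(reply("What are your hours?"))  # We are open 9am-5pm, Monday to Friday.
```

    Anything outside the rule set falls through to the fallback reply, which is exactly the limitation that AI-based chatbots address.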

    AI-Based Chatbots

    Such chatbots use NLP and machine learning to comprehend the context and intent of user queries. They can carry on more complicated dialogues, learn from interactions, and give more accurate and relevant responses.

    Code Example for Chatbots

    Here is a simple chatbot implemented in Python using the ChatterBot library:

    from chatterbot import ChatBot
    from chatterbot.trainers import ChatterBotCorpusTrainer

    # Create the bot and train it on the bundled English corpus
    chatbot = ChatBot('TechSupportBot')
    trainer = ChatterBotCorpusTrainer(chatbot)
    trainer.train("chatterbot.corpus.english")

    # Ask a question and print the bot's reply
    response = chatbot.get_response("What is Python programming?")
    print(response)

    Output:

    Python is a high-level, interpreted programming language known for its simplicity and versatility.

    Note: The exact output may differ depending on the version of the ChatterBot library and the training corpus used, since responses can vary.

    Deep Learning

    Deep learning is a subset of artificial intelligence that simulates the working of the human brain to extract patterns and correlations from massive amounts of data. It uses artificial neural networks (ANNs) to solve complex problems, e.g., tasks in image recognition, natural language processing, or autonomous systems.

    Deep Learning Models

    Deep learning models include multiple layers of neurons, each responsible for processing certain features of the input data. Using input, hidden, and output layers, the network learns a hierarchical representation of the data.
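
    The layered structure can be made concrete with a minimal forward pass in Python using NumPy. The weights here are random, so this sketches the input-hidden-output architecture only, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, n_out):
    """One fully connected layer with a ReLU activation (random weights)."""
    w = rng.standard_normal((x.shape[-1], n_out))
    return np.maximum(0.0, x @ w)  # ReLU keeps only positive activations

x = rng.standard_normal(4)              # input layer: 4 raw features
h1 = dense(x, 8)                        # first hidden layer: low-level features
h2 = dense(h1, 8)                       # second hidden layer combines them
out = h2 @ rng.standard_normal((8, 3))  # output layer: 3 class scores

print(out.shape)
```

    Stacking more hidden layers is what makes the representation hierarchical: each layer builds on the features computed by the one before it.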

    Feature Extraction

    Deep learning automatically extracts features from raw data, as opposed to traditional machine learning. This removes the need for manual feature engineering and makes learning new tasks more efficient.

    Scalability with Large Data Sets

    Deep learning thrives on large volumes of data: performance improves as the training dataset grows. Its success in real-world applications rests on this ability to exploit big data.

    High Accuracy and Precision

    Deep learning can match human-level accuracy on tasks such as image recognition and speech synthesis, and its models become more precise in their predictions as they process more data.

    End-to-End Learning

    Deep learning allows direct input-to-output mapping in tasks such as converting speech to text or translating languages, integrating all learning stages into a single model and simplifying workflows.

    Not Futuristic

    Words like Artificial Intelligence have rarely been used as often as they are today. AI has been inserted into almost every sector of people’s lives and now lies at the cornerstone of innovation in both physical and digital spaces. AI has both practicality and immense potential for solving real-world problems, and its applications are almost everywhere.

    Virtual Assistants

    Virtual assistants such as Alexa, Siri, and Google Assistant are AI-driven systems that understand natural language, set reminders, answer questions, and control smart devices.

    Predictive Analytics

    Businesses use AI for predictive analytics, such as analysing trends, forecasting future customer behaviour, and optimising their supply chains.

    Diagnosis and Imaging

    AI algorithms enable highly accurate early diagnosis by analyzing medical images (e.g., X-rays, MRIs).

    Virtual Health Assistants

    Applications provide health advice, medication reminders, and mental health support.

    Autonomous Vehicles

    Self-driving cars are powered by AI systems that make driving safer and reduce human error.

    Traffic Management

    Real-time traffic monitoring optimises public transportation and helps reduce congestion.

    Preventing Natural Disasters

    Natural disasters, including earthquakes, hurricanes, floods, and wildfires, are a major threat to human life and the environment, and their costly impact can bring changes to society and the economy that are just as destructive as the disaster itself. Artificial Intelligence (AI) can provide new ways of minimizing these threats.

    Machine Learning Models

    These models take historical and live data and make predictions for earthquakes, hurricanes, and floods; for instance, predicting flood risk from rainfall and topographic data, or forecasting hurricanes from atmospheric pressure and wind speed.
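
    As a toy illustration of the flood-risk example, the Python sketch below fits a logistic regression on invented rainfall and elevation figures; a real system would use far richer historical and live data.

```python
import numpy as np

# Toy data: [rainfall_mm, elevation_m]; label 1 = flooded (illustrative values).
X = np.array([[120.0, 5], [200, 3], [30, 50], [10, 80], [180, 10], [25, 60]])
y = np.array([1.0, 1, 0, 0, 1, 0])

mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma                    # standardize features

w, b = np.zeros(2), 0.0
for _ in range(3000):                    # logistic regression by gradient descent
    p = 1 / (1 + np.exp(-(Xs @ w + b)))  # predicted flood probability
    w -= 0.5 * Xs.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

def flood_risk(rain_mm, elev_m):
    """Probability of flooding for a new observation."""
    z = (np.array([rain_mm, elev_m]) - mu) / sigma
    return float(1 / (1 + np.exp(-(z @ w + b))))

print(flood_risk(190, 4))   # heavy rain, low ground: high risk
print(flood_risk(15, 70))   # light rain, high ground: low risk
```

    The model simply learns that high rainfall and low elevation correlate with flooding; the same fitting loop scales to many more features in practice.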

    Computer Vision

    Wildfire spread and disaster damage can be monitored by AI-driven drones equipped with computer vision.

    Satellite Imaging

    AI processes satellite imagery to recognize patterns of deforestation, soil erosion, or glacier melting that could lead to disasters.

    Urban and Infrastructure Planning

    AI helps automate the design of disaster-resistant infrastructure so that earthquakes and floods impact it as little as possible.

    Climate Change Analysis

    AI identifies long-term patterns in climate data, helping pinpoint the drivers of global warming and deforestation.

  • Applications of Artificial Intelligence

    Artificial Intelligence is a rapidly growing technology that is penetrating the contemporary world through solutions to numerous problems in different sectors. In sectors such as health care, education, finance, entertainment, and agriculture, its application brings significant improvements in efficiency, accuracy, and convenience. It replicates human thinking abilities and can tackle problems involving learning, reasoning, and decision-making at a scale and speed beyond a single human being.

    From helping cars drive themselves, to enhancing the experience a user gets from an interface, to helping researchers in their quest for new knowledge about our universe, AI is asserting itself in the age of smart systems. It is therefore important to understand this impactful technology and use it correctly.

    AI Applications

    The following are some sectors that apply artificial intelligence.

    Applications of AI

    AI in Astronomy

    • Automated Celestial Object Identification: AI identifies stars, galaxies, and other space phenomena from telescope image data with a high level of accuracy. This speeds up the discovery of new phenomena and frees astronomers to focus on the observations they find most interesting.
    • Exoplanet Hunting: AI analyses variations in the brightness of stars, looking for periodic dips that suggest a planet passing in front of its star, assisting scientists in finding new exoplanets.
    • Analyzing Space Information: AI ingests large data feeds from telescopes and detects patterns or irregularities that lead to discoveries, such as evidence for dark matter or black holes.
    • Real-time Monitoring of Space Events: AI systems are used to constantly survey space and look for such events as supernovae to alert astronomers.
    • Intelligent Telescope Control: AI adjusts telescope settings according to weather conditions and the type of observation planned, optimizing image quality and timeliness.
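
    The transit idea in the Exoplanet Hunting bullet can be sketched numerically: generate a synthetic light curve and flag samples that fall well below the star's normal brightness. All the numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic light curve: unit brightness with 0.1% noise,
# dimmed by 1% for 5 samples every 50 samples (the "transits").
flux = 1.0 + 0.001 * rng.standard_normal(500)
for start in range(40, 500, 50):
    flux[start:start + 5] -= 0.01  # planet blocks ~1% of the light

# Estimate the noise level from a quiet window, then flag deep dips.
noise = flux[:30].std()
threshold = np.median(flux) - 5 * noise
dips = np.where(flux < threshold)[0]

print(len(dips), dips[:5])
```

    Real pipelines (e.g., for Kepler data) fold the light curve on candidate periods and fit a transit shape, but the core signal is the same periodic dip.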

    AI in Healthcare

    • Medical Imaging Analysis: AI interprets X-rays, MRI, or CT scans to identify cancerous tissue, fractures, internal bleeding, and more.
    • Predictive Diagnostics: Patient history and habits are used to anticipate conditions such as heart disease or cancer before they occur.
    • Drug Discovery & Development: Software simulates molecular behavior to screen and select promising drug molecules in less time and at lower cost than the conventional approach.
    • Personalized Medicine: AI takes into consideration the patient’s genes and medical history to ensure the best results with minimal side effects.
    • Operational Efficiency in Hospitals: AI enhances the usage of resources, patient traffic, and staff timetables, as well as limits the wait time for patients.

    AI in Gaming

    • Smart NPC Behavior: AI makes non-player characters react and adapt dynamically, providing a challenging and immersive gaming experience.
    • Procedural Content Generation: AI generates a wide variety of game environments, levels, and quests, saving development time and keeping play varied.
    • Realistic Graphics and Physics: AI improves the graphics, which makes characters and objects look almost real, as well as making the physical aspects of the game real.

    AI in Finance

    • Fraud Detection: Through machine learning, AI can analyze and quickly distinguish fraudulent transactions based on previous instances, lowering the occurrence of fraud.
    • Algorithmic Trading: Computer-driven buying and selling of stocks determines the best times to trade based on current data and models, pursuing higher returns.
    • Credit Risk Assessment: The AI here looks at how a borrower handles their credit; this enables the lenders to make informed decisions.

    AI in Data Security

    • Anomaly Detection: AI mines network traffic and flags exceptions that may indicate problems such as breaches or malware.
    • Risk Estimation: Based on the past records of attacks, the AI determines the possible threats in the future for which preparations can be made beforehand.
    • Automated Response Systems: When an intrusion is identified, AI can immediately isolate the affected machines or trigger countermeasures.
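
    A minimal version of the anomaly-detection idea is a z-score check on request volume. The per-minute request counts below are invented for illustration; production systems use learned models over many signals.

```python
import statistics

# Requests per minute observed during normal operation (illustrative).
baseline = [120, 118, 125, 130, 122, 119, 128, 124, 121, 127]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(requests_per_minute, z_threshold=3.0):
    """Flag traffic more than z_threshold standard deviations from normal."""
    z = abs(requests_per_minute - mean) / stdev
    return z > z_threshold

print(is_anomalous(123))   # typical minute
print(is_anomalous(480))   # sudden spike worth investigating
```

    The same pattern, a learned baseline plus a deviation threshold, underlies far more sophisticated detectors.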

    AI in Social Media

    • Content Recommendations: AI follows users’ behavior and habits and suggests posts, videos, or advertisements likely to engage them.
    • AI Chatbots: These bots automate customer service, responding to customers’ messages or comments without delay.
    • Sentiment Analysis: AI gauges users’ attitudes toward topics, products, and services from their comments, informing business decisions.
    • Trend Identification: AI shows brands which topics to watch and which content is likely to go viral.
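
    A crude lexicon-based version of the sentiment-analysis idea can be written in a few lines of Python. Real systems use trained language models; the word lists here are tiny illustrative samples.

```python
POSITIVE = {"great", "love", "excellent", "amazing", "good"}
NEGATIVE = {"bad", "hate", "terrible", "awful", "poor"}

def sentiment(text):
    """Label text by counting positive vs. negative words."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))
print(sentiment("terrible service, I hate it"))
```

    Lexicon scoring misses sarcasm and context, which is why production sentiment analysis relies on machine-learned models instead.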

    AI in Travel & Transport

    • Route Planning: AI determines efficient routes across junctions and highways based on real-time conditions, avoiding wasted time and fuel.
    • Security Screening: Intelligent scanning tools detect threats at airports while cutting the time security checks take.
    • Virtual Travel Assistants: AI takes the form of agents that can help with bookings, provide suggestions, or answer inquiries, thus increasing comfort.
    • Vehicle and Infrastructure Management: By using AI to foresee when vehicles or infrastructures need to be repaired, there is less risk of breakdowns and accidents with preserved safety.

    AI in Automotive Industry

    • Autonomous Vehicles: AI allows cars to sense their environment and drive themselves.
    • Driver Assistance Systems (ADAS): Some smart features include the ability to identify lanes, maintain lane position, use adaptive cruise control, and automatically brake in case there is an obstacle.
    • Manufacturing Automation: AI checks for defects on production lines, controls inventories and increases the efficiency of the manufacturing factory.
    • Voice Control: AI lets drivers operate the car’s navigation, phone calls, and media hands-free, reducing distraction.

    AI in Robotics

    • Autonomous Navigation: Robots can move around and work independently, for instance in warehouses or disaster-affected regions.
    • Object Manipulation: AI allows robots to identify, pick, and appropriately interact with various objects for applications such as logistics and production.
    • Human-Robot Collaboration: AI makes robots easier for people to work with, assisting humans in completing tasks without compromising workplace safety.

    AI in Entertainment

    • Personalized Content: Movies, series or music suggestions depending on the user’s preferences increase the level of satisfaction among the audience.
    • Creative Tools: One of the most important creative use cases of AI is that AI provides the tools to support artists as co-creators in the production of pieces such as music, artwork, or videos.
    • Interactive Live Shows: AI enables real-time translation and interactive effects that enhance the impact of stage shows.

    AI in Agriculture

    • Crop Monitoring: Drones and sensors help the AI system detect crop health, moisture, and pests to enable timely action.
    • Precision Agriculture: AI knows how much water, fertilizer, or pesticide is best to apply to a certain area in order to have maximum return without wasting resources.
    • Automated Equipment: AI-driven machines handle planting, applying chemicals, and even harvesting, reducing labor costs.
    • Livestock Tracking: AI systems monitor animals, helping farmers spot health problems or abnormal behavior in their stock.

    AI in E-commerce

    • Product Recommendations: Customers are offered products related to their interests, which boosts sales and satisfaction with what is recommended.
    • Inventory Management: AI anticipates customer demand and automatically reorders to replenish stock, while avoiding over-ordering products that are not in high demand.
    • Dynamic Pricing: Prices are adjusted continually according to market trends, competitor prices, and customer demand, to maximize revenue.
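
    The dynamic-pricing bullet can be sketched as a simple Python rule. The demand scaling and the 5% competitor cap below are arbitrary assumptions for illustration, not an industry formula.

```python
def dynamic_price(base_price, demand_ratio, competitor_price):
    """Scale price with demand, but stay near the competitor's price.

    demand_ratio is recent demand divided by normal demand (1.0 = typical).
    """
    price = base_price * (0.8 + 0.2 * min(demand_ratio, 2.0))  # scale with demand
    price = min(price, competitor_price * 1.05)  # cap slightly above competitor
    price = max(price, base_price * 0.8)         # never discount below 80% of base
    return round(price, 2)

print(dynamic_price(100, 1.0, 110))   # typical demand: base price
print(dynamic_price(100, 2.0, 110))   # demand surge: raised, capped by competitor
```

    Real systems replace the fixed coefficients with learned demand curves, but the structure, a demand signal plus competitive constraints, is the same.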

    AI in Education

    • Automated Content Generation: With the help of AI, teachers can quickly produce quizzes, notes, and lesson plans, saving manual effort and improving the quality of the content.
    • Virtual Tutors: AI-powered tutors are available around the clock to assist learners with questions and guidance.
    • Instant Feedback: AI can grade assignments and tests in very little time, easing the task of tracking students’ performance.
    • Personalized Learning Paths: AI applications provide students with materials suited to their individual abilities and difficulties.
  • Artificial Intelligence (AI) Tutorial

    Artificial Intelligence (AI) can be described as an emerging branch of computer science concerned with creating intelligent systems that mimic different human competencies. Everyday uses include voice assistants, self-driving cars, recommendation systems, and medical diagnosis. It draws on machine learning, data science, robotics, and other areas to design systems that can learn, infer, and act. That is why it is important to understand the core concepts of AI as well as its possibilities and outcomes in practice.

    AI Tutorial

    What is Artificial Intelligence (AI)?

    Artificial intelligence, as a computer science discipline, develops machines that perform tasks requiring human cognitive abilities. These abilities encompass learning, reasoning, problem-solving, perception, and decision-making.

    Introduction to AI

    AI merges the words “Artificial”, describing human-made components, and “Intelligence”, referring to thinking capabilities to generate machines that emulate human thought processes.

    Definition:

    “It is a branch of computer science by which we can create intelligent machines that can behave like a human, think like humans, and be able to make decisions.”

    Artificial intelligence is the ability of a computer to learn, reason, and solve problems like a human being.

    Artificial intelligence is remarkable because it allows you to build a machine with learning algorithms that can use its own intelligence to accomplish tasks, without your having to preprogram it for each one.

    Why Artificial Intelligence?

    AI is important because:

    • It solves real-world problems in areas like healthcare, marketing, and traffic management.
    • It helps you create your virtual assistant, like Cortana, Google Assistant, Siri, etc.
    • It allows robots to work in conditions that may be either hazardous to human life or are impossible for a human being to access.
    • It promotes creativity and creates a number of opportunities for further development of technology and its usage.

    History of AI

    • It is very interesting to note that the concept of intelligent machines existed even in ancient civilizations, myths, and monuments such as the Egyptian pyramids. Philosophers such as Aristotle and Ramon Llull studied symbolic reasoning.
    • In the 1800s and 1900s, Charles Babbage and Ada Lovelace introduced the concept of programmable machines. In the 1940s, John von Neumann developed the stored-program computer, and McCulloch & Pitts introduced the idea of neural networks.
    • After the Second World War, in the 1950s, Alan Turing proposed the Turing Test. The term “AI” was first used in 1956 at Dartmouth College, and the first AI program was the Logic Theorist.

    What Does Artificial Intelligence Comprise?

    AI is actually not limited to computer science; it draws on several domains to mimic human intelligence. Intelligence includes reasoning, learning, problem-solving, perception, and language understanding.

    To achieve this, AI leverages a number of fields, such as:

    • Mathematics
    • Biology
    • Psychology
    • Sociology
    • Computer Science
    • Neuroscience
    • Statistics

    These fields work together to develop intelligent systems capable of human-like behaviour.

    Types of Artificial Intelligence

    Artificial Intelligence is divided into different types, mostly determined by two key factors: capabilities and functionality.

    AI Type 1: Based on Capabilities

    1. Weak AI or Narrow AI: This type of AI solves specific problems and focuses on one particular kind of job. It is only effective within its own area and does not transfer to other domains. Examples include virtual assistants such as Siri, image recognition systems, and IBM‘s Watson.
    2. General AI: General AI, also known as Strong AI, refers to machines capable of performing any intellectual task a human can. It aims at human-like capabilities such as reasoning and learning. This type of AI is still under research and has not yet been realized.
    3. Super AI: A hypothetical AI superior to humans in every domain, including decision-making, problem-solving, learning, and even feelings and emotions. It is the final stage of AI development and does not currently exist.

    AI Type 2: Based on Functionality

    1. Reactive Machines: This type of AI processes only the current input and keeps no previous experience; it follows pre-defined rules. Well-known examples include IBM’s chess machine Deep Blue and Google’s Go-playing AlphaGo.
    2. Limited Memory: These systems use recent information for a limited period to inform their decisions. A concrete example is a self-driving car, which tracks other vehicles, their speed, and road conditions.
    3. Theory of Mind: The purpose of this AI is to comprehend the feelings, desires or even gestures of people. As previously stated, it is still part of the theoretical research and has not been fully realized.
    4. Self-Awareness: The final, still-theoretical type of AI would surpass human intelligence, possessing consciousness and feelings of its own. Reaching this level would mark a profound advance in technology and knowledge.

    Advantages of AI

    The following are some main advantages of Artificial Intelligence:

    • High Accuracy with less error: AI machines or systems have a low incidence of error and are highly accurate because they make their decisions based on experience or knowledge.
    • High-Speed: AI systems are able to make decisions quickly and with extreme speed; as a result, they are able to defeat a chess champion in a chess game.
    • High reliability: AI systems are incredibly dependable and capable of accurately repeating the same task over and again.
    • Beneficial for hazardous environments: AI devices can be useful in dangerous environments where utilizing humans might be harmful, such as defusing a bomb or researching the ocean floor.
    • Digital Assistant: AI has a number of applications, for instance, in the current generation of E-commerce websites where AI technology can be used to show products in accordance with consumers’ demands.
    • Useful as a public utility: Artificial intelligence (AI) has the potential to be highly helpful for public utilities like self-driving cars, which can make our travels safer and less complicated, face recognition for security, natural language processing to speak to people in their native tongue, etc.
    • Enhanced Security: AI can indeed be very beneficial in improving security issues because of its ability to scan security threats when they are happening and counteract them to prevent affecting the firm and organization’s information and machinery.
    • Aid in Research: AI is useful to the research process as it helps researchers analyze large data sets in areas such as astronomy, genomics, and materials science in a timely manner.

    Disadvantages of AI

    There are drawbacks to any technology, including artificial intelligence. The drawbacks of AI are as follows:

    • Expensive: Since the AI requires regular maintenance to adjust to modern standards, the hardware and software costs are relatively high.
    • Unable to think creatively: To date, AI cannot be said to possess creativity, because its operations are limited to the specific instructions and programs given to it.
    • No feelings or emotions: Robots can be incredible performers, but they lack feelings, which are essential for forming friendly relationships with humans. Relying on them for care without adequate safeguards could leave users unsafe.
    • Increased reliance on machines: People risk letting their own skills atrophy as they grow ever more dependent on devices.
    • Lack of Original Creativity: However impressive their progress, AI systems still cannot match human intelligence in creativity and inventiveness.
    • Complexity: Building and maintaining artificial intelligence can be difficult and requires specialized skills, which puts it out of reach for some people and organizations.
    • Job Concerns: AI will not stop at replacing basic jobs; it may also encroach on skilled professions. For this reason, many people in various fields are anxious about losing their jobs to it.

    Challenges of AI

    AI has several benefits, but it also has some challenges that must be solved:

    • Doing the Right Thing: AI has to make the right decisions, but sometimes it does not: it can be wrong or act in undesirable ways. Its decision-making must be improved so that it more reliably makes good choices.
    • Government and AI: Governments sometimes employ AI for surveillance of people. This can threaten personal freedom, so we must ensure AI is used responsibly.
    • Bias in AI: AI can be biased, for instance when identifying the facial features of different people. This disadvantages individuals who are underrepresented in the data it was trained on.
    • AI and Social Media: Social media feeds are curated by AI, yet it sometimes surfaces false or harmful information. It is important for AI to promote the right content.
    • Legal and Regulatory Challenges: With the advancement of AI, there is inadequate legislative and regulatory law to cover most of the issues that surround AI, such as accountability and responsibility.

    AI Tools and Services

    AI tools and services are developing rapidly, a trend that traces back to 2012 and the appearance of the AlexNet neural network. By training on GPUs with large datasets, AlexNet opened a new era of high-performance AI and showed how effective it is to train neural networks on large quantities of data across multiple GPUs at once.

    • Transformers: Google trained AI more effectively on large clusters of machines equipped with specialized processors (GPUs), which made transformers feasible. Transformers enable AI to learn from unlabelled data, much like a person picking up a language from exposure alone.
    • Hardware Advancements: Businesses such as Nvidia enhanced their GPUs’ internal mechanisms, improving their ability to handle the mathematical workloads AI must perform. The combination of data centres, smarter AI software, and better hardware has multiplied AI performance enormously. Nvidia also collaborates with cloud service providers to ensure that others can apply this powerful AI without difficulty.
    • GPTs: Previously, a company wanting to incorporate AI into its operations had to build it from the ground up, which was costly and slow. Today, companies like OpenAI, Nvidia, Microsoft, and Google provide pre-trained AI models that can be fine-tuned for specific tasks more efficiently and at lower expense. This helps businesses adopt AI faster and with fewer risks.
    • AI in the Cloud: AI is not always easy to use because it requires a lot of data processing. The largest cloud computing firms, such as Amazon, Google, Microsoft, IBM, and Oracle, ease this problem by offering AI services for the difficult parts of the task, such as data preparation, model training, and integrating AI into applications.
    • Advanced AI for Everyone: Some organizations develop excellent AI models and publish them. For instance, OpenAI offers models proficient in language comprehension, image creation, and even coding, while others produce specialized models for particular occupations and professions. The ecosystem has become a vast toolbox of strong tools for a range of activities.

    Prerequisite

    Before studying artificial intelligence, you need to be familiar with the following basics to help you grasp the ideas:

    • Any computer language, including Python, Java, C, C++, etc. (but proficiency in Python will be helpful)
    • Understanding fundamental concepts in mathematics, including probability theory, derivatives, etc.

    Conclusion

    Artificial Intelligence (AI) has become an integral part of society, shaping how technology supports everyday life. Its impact is present in all spheres of human activity, including healthcare, education, transportation, and entertainment. This level of integration holds the promise of helping address major problems and enhancing people’s performance.

    However, it also raises important social and ethical issues: job loss, privacy invasion, and accountability. For AI to improve human life, it should be developed morally and ethically. By studying AI, you equip yourself for a future expected to be shaped by artificial intelligence.

  • Migrating from JavaScript to TypeScript

    TypeScript is a superset of JavaScript and provides more features than JavaScript. The main purpose of using TypeScript in any project instead of JavaScript is to achieve type safety as TypeScript allows defining types for each variable, function parameters, function return type, etc.

    Furthermore, TypeScript offers static typing and supports interfaces, enums, generics, and other high-level features that plain JavaScript lacks. Here, you will learn to migrate your JavaScript code to TypeScript.

    Why Migrate from JavaScript to TypeScript?

    Here are a few reasons why anyone should use TypeScript over JavaScript:

    • Type Safety: TypeScript’s core feature is its ability to perform static type checking.
    • Improved Code Quality: TypeScript can catch errors early in the development process, which can save costs and time in software development.
    • Advanced Features: TypeScript supports advanced features like interfaces, enums, and generics, which are not supported by JavaScript.
    • Scalability: Easier to manage and scale large codebases with TypeScript.

    Steps to Migrate from JavaScript to TypeScript

    You can follow the below steps to migrate your JavaScript code to TypeScript:

    Pre-requisite

    • You should have a JavaScript file containing the JavaScript code.

    Step 1: Setting Up Your Environment

    If you haven’t installed TypeScript, execute the below command in the terminal to install it:

    npm install -g typescript

    Step 2: Add tsconfig.json File in the Project

    The main part of converting the JavaScript project into the TypeScript project is adding the tsconfig.json file in the root directory of the project.

    The tsconfig.json file contains a single JSON object containing various properties. It defines the configuration to compile the TypeScript code into plain JavaScript code.

    You can create a new tsconfig.json file and add the below code to that:

    {
      "compileOnSave": true,
      "compilerOptions": {
        "target": "es6",
        "lib": ["es6", "dom"],
        "module": "commonjs"
      },
      "include": ["src/**/*"]
    }

    However, you can also remove some properties or add them in the tsconfig.json file according to your requirements.

    Step 3: Convert JavaScript Files to TypeScript

    Now you are ready to use TypeScript files. The TypeScript compiler compiles only TypeScript files (.ts and .tsx). Rename one of your .js files to .ts; if the file includes JSX, rename it to .tsx. After renaming, you may notice that the file contains some type errors. To handle these, we add type annotations.

    Add Type Annotations

    After renaming the JavaScript files to TypeScript, we start adding type annotations to variables, function parameters, and return types. This helps the TypeScript compiler catch potential errors. Let’s take an example.

    Suppose, we have the below code in the JavaScript file:

    function add(a, b) {
      return a + b;
    }

    The above code contains the add() function, which takes two parameters a and b and returns their sum.

    To convert the above code to TypeScript, you need to add types for the function parameters and specify the return type for the function.

    function add(a: number, b: number): number {
       return a + b;
    }

    Here, we have used the number type for both parameters and the return type of the function.
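    Type annotations work the same way for objects. In the hedged sketch below, the User interface and formatUser function are hypothetical names chosen only for illustration; an interface describes the shape of an object parameter so the compiler flags missing or misspelled properties:

```typescript
// Hypothetical migration example: describing an object's shape with an interface.
interface User {
  name: string;
  age: number;
}

// The compiler now rejects calls that pass an object missing `name` or `age`.
function formatUser(user: User): string {
  return `${user.name} is ${user.age}`;
}

console.log(formatUser({ name: "Ada", age: 36 })); // Ada is 36
```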

    Step 4: Solve Errors and Install External Libraries

    After converting the JavaScript code into TypeScript, make sure to resolve any errors shown in the code editor. Otherwise, you will get errors when compiling the TypeScript code into plain JavaScript code.

    If you are using external libraries in the JavaScript project, you can use the Node Package Manager (NPM) to install them in the TypeScript project. If you don't install these external libraries, the compiler will throw an error.

    Step 5: Compile the TypeScript Code

    After solving the errors, execute the following command in the terminal to compile the TypeScript code:

    npx tsc filename

    In the above command, you can replace filename with the actual name of the TypeScript file.

    After executing the command, it will generate a JavaScript file with the same name as the TypeScript file. This JavaScript file can be used like a normal JavaScript file with HTML or executed using Node.js.
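    For larger projects, you don't have to convert every file at once. The allowJs compiler option lets JavaScript and TypeScript files coexist, so you can rename files gradually. A minimal sketch of such a tsconfig.json, assuming a src source directory and a dist output directory:

```json
{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "allowJs": true,
    "outDir": "dist"
  },
  "include": ["src/**/*"]
}
```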

    This lesson has explained the basic steps to convert a JavaScript project into TypeScript. With the help of these steps, you can also convert a complex JavaScript project into TypeScript.

  • tsconfig.json

    The TypeScript tsconfig.json file is a configuration file that specifies the compiler options used to compile the TypeScript code into JavaScript code. It also allows developers to specify additional project configuration in JSON format.

    The tsconfig.json file is placed in the root directory of the project and contains its data in JSON format.

    Basic Structure of tsconfig.json file

    The tsconfig.json file mainly contains the five properties listed below, all of which are optional:

    • compileOnSave
    • compilerOptions
    • files
    • include
    • exclude

    If you don't use any of these properties in the tsconfig.json file, the compiler uses the default settings.

    Here is the basic structure of the tsconfig.json file.

    {
       "compileOnSave": true,
       "compilerOptions": {
          "target": "es6",
          "lib": ["es6", "dom"],
          "module": "commonjs"
       },
       "files": ["app.ts", "base.ts"],
       "include": ["src/**/*"],
       "exclude": ["node_modules", "src/**/*.calc.ts"]
    }

    In the above file, you can add more compiler options, or more elements in the lists of the other properties, according to your requirements.

    Let's understand each property one by one.

    The compileOnSave Property

    The compileOnSave property is used to specify whether you want to compile the project code immediately when you save the code. The default value of the compileOnSave property is false.

    If you use the false value for this property, you need to compile the code manually.

    Here is how you can use the compileOnSave property in the tsconfig.json file.

    {
       "compileOnSave": boolean_value
    }

    The compilerOptions Property

    The compilerOptions is a widely used property in the tsconfig.json file. It is used to specify the settings for the TypeScript compiler to compile the code. For example, if you want to use a specific version of JavaScript or a module while compiling the TypeScript code, you can modify the compilerOptions property.

    Here are some common compiler options to use in the tsconfig.json file.

    • target: Specifies the target ECMAScript version for the output JavaScript files. "es6" targets ECMAScript 2015.
    • experimentalDecorators: Enables experimental support for ES decorators, which are a stage 2 proposal to the ECMAScript standard.
    • lib: Specifies a list of library files to be included in the compilation. For example, include "es6" and "dom" for the relevant APIs.
    • module: Specifies the module system for the project. "commonjs" is typically used for Node.js projects.
    • esModuleInterop: Enables compatibility with non-ES-Module-compliant imports, allowing default imports from modules with no default export.
    • resolveJsonModule: Allows importing of .json files as modules in the project.
    • strict: Enables all strict type-checking options, improving the strictness and accuracy of type checks in TypeScript.
    • listFiles: When set, the compiler will print out a list of files that are part of the compilation.
    • outDir: Redirects the output structure to the specified directory. Useful for placing compiled files in a specific directory.
    • outFile: Concatenates and emits output to a single file. If outFile is specified, outDir is ignored.
    • rootDir: Specifies the root directory of input files. Useful for controlling the output directory structure with outDir.
    • sourceRoot: Specifies the location where a debugger should locate TypeScript source files instead of their default location.
    • allowJs: Allows JavaScript files to be compiled along with TypeScript files. Useful in projects that mix JS and TS.
    • strictNullChecks: When enabled, the compiler will perform strict null checks on your code, which can help prevent null or undefined access errors.

    Here is the common way to use the compilerOptions in tsconfig.json file.

    {
       "compilerOptions": {
          "target": "es6",
          "lib": ["es6", "dom"],
          "module": "commonjs"
       }
    }

    The files Property

    The files property takes a list of files to include in the compilation process. You can add filenames directly if they are in the root directory, or provide a relative or absolute path for each file you want to include.

    Here, we have shown how to use the files property in the tsconfig.json file.

    "files": ["app.ts", "base.ts"]

    The include Property

    The include property allows developers to specify the list of TypeScript files for the compilation using wildcard patterns.

    If you want to add all files to the compilation, you can use the following wildcard pattern.

    "include": ["src/**/*"]

    The above configuration includes all files inside the src directory, recursively.

    The exclude Property

    The exclude property is the opposite of the include property. It allows developers to remove particular files from the compilation process using wildcard patterns.

    "exclude": ["node_modules", "src/**/*.calc.ts"]

    The above configuration removes the node_modules directory and all files ending in .calc.ts under src from the compilation process.

    Common Scenarios and Configurations

    Here, we have explained the common scenarios for which developers are required to change the tsconfig.json file.

    Targeting Different ECMAScript Versions

    If you want to target a different ECMAScript version while compiling the TypeScript code, you can use the following configuration. Here, we have changed the values of the target and module properties of the compilerOptions object.

    {
       "compilerOptions": {
          "target": "es6",
          "module": "es2015"
       }
    }

    Including Node.js Type Definitions

    When you work with Node.js, you might need to add a type definition, and here is how you can add it.

    {
       "compilerOptions": {
          "module": "commonjs",
          "target": "es2018",
          "lib": ["es2018", "dom"]
       },
       "include": ["src/**/*"]
    }

    Excluding Test Files

    Sometimes, developers are required to remove the testing files from the compilation process. Here is how you can remove particular test files or directories using the exclude property.

    {
       "exclude": ["**/*.spec.ts", "**/*.test.ts"]
    }

    It is important for TypeScript developers to understand how to manage the tsconfig.json file. You can edit it to change the target or module used during compilation, add or remove files from the compilation process, and enable automatic compilation on save.

  • Boxing and Unboxing

    A value type in TypeScript can be converted into a reference type through a process known as boxing. In other words, boxing refers to transforming a value type into a reference type, and unboxing refers to transforming a reference type back into a value type.

    Boxing is the process of wrapping a value type in an object type. In contrast, unboxing is the process of unwrapping an object type back to a value type. Because each boxing operation allocates a new object, knowing when boxing and unboxing occur helps you reason about the performance and memory usage of your TypeScript applications.

    Boxing and unboxing in TypeScript describe how primitive values are handled when they need to be treated as objects. When a primitive value is assigned to an object-typed variable, it is boxed, meaning it is treated as an object; when it is converted back, the object is unboxed and the primitive value is recovered. This conversion is what allows object-style methods to be used on primitive values.

    Let us explain both topics one by one in detail.

    Boxing in TypeScript

    Boxing in TypeScript refers to converting a value of a primitive data type (e.g., number, string, boolean) into an object of the corresponding wrapper class.

    TypeScript has built-in wrapper classes for the primitive data types, such as Number, String, and Boolean. These wrapper classes provide useful methods and properties that can be used to manipulate the corresponding primitive data types.

    For example, the Number wrapper class has methods such as toFixed(), toString(), and valueOf(). Boxing is an important concept in TypeScript, as it allows for using methods on primitive data types that would not otherwise be available.
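    The following sketch shows these wrapper methods in action on a primitive number; the variable name price is just an illustrative choice:

```typescript
// Calling Number wrapper methods directly on a primitive.
// The primitive is treated as a Number object for the duration of each call.
let price: number = 123.456;

console.log(price.toFixed(2));  // "123.46" - rounded to 2 decimal places
console.log(price.toString());  // "123.456" - string representation
console.log(price.valueOf());   // 123.456 - the primitive value back
```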

    Syntax

    let variable_name: number = 12345
    let boxing_variable_name: Object = variable_name // Boxing

    In the above syntax, the value of the variable_name variable of type number is converted to an object-type variable in the process of boxing.

    Example

    In this example, we perform a boxing operation. We declare a class named BoxingClass with two variables: one is a number, and the other is an object-type variable. We declare a method named boxingMethod(), where we perform the boxing operation. Finally, we log the my_object variable's value.

    class BoxingClass {
       my_number: number = 123
       my_object: Object

       boxingMethod() {
          this.my_object = this.my_number
          console.log('Boxing Occurs for my_object variable')
       }
    }
    let boxing_object = new BoxingClass()
    boxing_object.boxingMethod()
    console.log('my_object value: ', boxing_object.my_object)

    On compiling, it will generate the following JavaScript code

    var BoxingClass = /** @class */ (function () {
       function BoxingClass() {
          this.my_number = 123;
       }
       BoxingClass.prototype.boxingMethod = function () {
          this.my_object = this.my_number;
          console.log('Boxing Occurs for my_object variable');
       };
       return BoxingClass;
    }());
    var boxing_object = new BoxingClass();
    boxing_object.boxingMethod();
    console.log('my_object value: ', boxing_object.my_object);

    Output

    The above code will produce the following output

    Boxing Occurs for my_object variable
    my_object value:  123
    

    Unboxing in TypeScript

    Unboxing in TypeScript converts a value with a compound data type (object, array, tuple, union, etc.) into a simpler data type (string, number, boolean, etc.). It is similar to unboxing in other programming languages, where a value of a particular type (like an object) is converted into a simpler type, such as a string or number.

    In TypeScript, unboxing is done using the type assertion syntax (angle brackets) to specify the type of the value to be unboxed. For example, if we have a value of type any, we can unbox it to a number type by using the following syntax: <number> value.
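    Equivalently, the as operator performs the same assertion; this form is required inside .tsx files, where angle brackets would conflict with JSX. A small sketch (the variable names are illustrative):

```typescript
// Boxing a primitive into an Object-typed variable.
let boxed: Object = 12345;

// Unboxing with the two equivalent assertion syntaxes.
let viaAngle: number = <number>boxed; // angle-bracket syntax
let viaAs: number = boxed as number;  // `as` syntax (required in .tsx files)

console.log(viaAngle + viaAs); // 24690
```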

    Syntax

    let variable_name: number = 12345
    let boxing_variable_name: Object = variable_name // Boxing
    let unboxing_variable_name: number = <number>boxing_variable_name // Unboxing

    In the above syntax, the value of the variable_name variable of type number is first converted to an object-type variable in the process of boxing and then converted back to a number using unboxing.

    Example

    In this example, we perform both boxing and unboxing operations. We declare a class named BoxingUnboxingClass with three variables: two are numbers, and one is an object-type variable. First, we perform the boxing process using boxingMethod(), and then the unboxing using unboxingMethod(). Finally, we log the variables' values.

    class BoxingUnboxingClass {
       my_number: number = 123
       boxing_variable: Object
       unboxing_variable: number

       boxingMethod() {
          this.boxing_variable = this.my_number
          console.log('Boxing Occurs!')
       }
       unboxingMethod() {
          this.unboxing_variable = <number>this.boxing_variable
          console.log('Unboxing Occurs!')
       }
    }
    let boxing_unboxing_object = new BoxingUnboxingClass()
    boxing_unboxing_object.boxingMethod()
    boxing_unboxing_object.unboxingMethod()
    console.log('boxing_variable value: ', boxing_unboxing_object.boxing_variable)
    console.log('unboxing_variable value: ', boxing_unboxing_object.unboxing_variable)

    On compiling, it will generate the following JavaScript code

    var BoxingUnboxingClass = /** @class */ (function () {
       function BoxingUnboxingClass() {
          this.my_number = 123;
       }
       BoxingUnboxingClass.prototype.boxingMethod = function () {
          this.boxing_variable = this.my_number;
          console.log('Boxing Occurs!');
       };
       BoxingUnboxingClass.prototype.unboxingMethod = function () {
          this.unboxing_variable = this.boxing_variable;
          console.log('Unboxing Occurs!');
       };
       return BoxingUnboxingClass;
    }());
    var boxing_unboxing_object = new BoxingUnboxingClass();
    boxing_unboxing_object.boxingMethod();
    boxing_unboxing_object.unboxingMethod();
    console.log('boxing_variable value: ', boxing_unboxing_object.boxing_variable);
    console.log('unboxing_variable value: ', boxing_unboxing_object.unboxing_variable);

    Output

    The above code will produce the following output

    Boxing Occurs!
    Unboxing Occurs!
    boxing_variable value:  123
    unboxing_variable value:  123
    

    Boxing and unboxing in TypeScript refer to the way primitive values are handled when they need to be treated as objects. Boxing converts a primitive value type into an object type, while unboxing is the reverse process of converting an object type back into a primitive value type.

    In TypeScript, boxing is done by assigning a primitive value to an object variable, and unboxing is done using type assertion syntax (angle brackets) to specify the type of the value to be unboxed. It is important to note that the primitive value’s memory is allocated on the stack, and the object value’s memory is allocated on the heap.

  • Utility Types

    TypeScript allows us to create new types from existing types, and we can use utility types for such transformations.

    Various utility types exist in TypeScript, and we can use any of them according to our type-transformation requirements.

    Let's discuss the different utility types in TypeScript with examples.

    Partial Type in TypeScript

    The Partial utility type makes all the properties of a type optional. Since every property becomes optional, an object of the partial type may set all, some, or none of them. It is useful while refactoring code that works with objects.

    Example

    In the example below, we have created the Type containing some optional properties. After that, we used the Partial utility type to create a partialType object. Notice that we haven't initialized all the properties of the partialType object, as all properties are optional.

    type Type = {
       prop1: string;
       prop2: string;
       prop3: number;
       prop4?: boolean;
    };
    let partialType: Partial<Type> = {
       prop1: "Default",
       prop4: false,
    };

    console.log("The value of prop1 is " + partialType.prop1);
    console.log("The value of prop2 is " + partialType.prop2);

    On compiling, it will generate the following JavaScript code

    var partialType = {
       prop1: "Default",
       prop4: false
    };
    console.log("The value of prop1 is " + partialType.prop1);
    console.log("The value of prop2 is " + partialType.prop2);

    Output

    The above code will produce the following output

    The value of prop1 is Default
    The value of prop2 is undefined
    
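    A common practical use of Partial is an update helper that accepts any subset of an object's fields. The Settings type and updateSettings function below are hypothetical names used only for illustration:

```typescript
type Settings = {
  theme: string;
  fontSize: number;
};

// Accepts any subset of Settings fields and merges them over the current values.
function updateSettings(current: Settings, changes: Partial<Settings>): Settings {
  // Spread the changes over the current values; omitted fields are kept.
  return { ...current, ...changes };
}

const updated = updateSettings({ theme: "light", fontSize: 14 }, { theme: "dark" });
console.log(updated.theme);    // dark
console.log(updated.fontSize); // 14
```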

    Required Type in TypeScript

    The Required utility type transforms a type such that all of its properties become required. When we use the Required utility type, all optional properties become required properties.

    Example

    In this example, Type contains the prop3 optional property. After transforming the Type using the Required utility operator, prop3 also became required. If we do not assign any value to the prop3 while creating the object, it will generate a compilation error.

    type Type = {
       prop1: string;
       prop2: string;
       prop3?: number;
    };
    let requiredType: Required<Type> = {
       prop1: "Default",
       prop2: "Hello",
       prop3: 40,
    };
    console.log("The value of prop1 is " + requiredType.prop1);
    console.log("The value of prop2 is " + requiredType.prop2);

    On compiling, it will generate the following JavaScript code

    var requiredType = {
       prop1: "Default",
       prop2: "Hello",
       prop3: 40
    };
    console.log("The value of prop1 is " + requiredType.prop1);
    console.log("The value of prop2 is " + requiredType.prop2);

    Output

    The above code will produce the following output

    The value of prop1 is Default
    The value of prop2 is Hello
    

    Pick Type in TypeScript

    The Pick utility type allows us to pick particular properties from another type and create a new type. The keys to pick are given as string literals; to pick multiple keys with their types, combine the keys with the union operator (|).

    Example

    In the example below, we have picked the color and id properties from type1 and created a new type using the Pick utility type. Users can see that when they try to access the size property of newObj, it gives an error, as the type of newObj doesn't contain the size property.

    type type1 = {
       color: string;
       size: number;
       id: string;
    };
    let newObj: Pick<type1, "color" | "id"> = {
       color: "#00000",
       id: "5464fgfdr",
    };
    console.log(newObj.color);
    // This will generate a compilation error, as the type of newObj doesn't contain the size property
    // console.log(newObj.size);

    On compiling, it will generate the following JavaScript code

    var newObj = {
       color: "#00000",
       id: "5464fgfdr"
    };
    console.log(newObj.color);
    // This will generate a compilation error, as the type of newObj doesn't contain the size property
    // console.log(newObj.size);

    Output

    The above code will produce the following output

    #00000
    

    Omit Type in TypeScript

    The Omit utility type removes the given keys from a type and creates a new type. It is the opposite of Pick: whatever keys we pass to the Omit utility type are removed from the type, and a new type is returned.

    Example

    In this example, we have omitted the color and id properties from type1 using the Omit utility type and created the omitObj object. When a user tries to access the color and id properties of omitObj, it will generate an error.

    type type1 = {
       color: string;
       size: number;
       id: string;
    };
    let omitObj: Omit<type1, "color" | "id"> = {
       size: 20,
    };
    console.log(omitObj.size);
    // This will generate an error
    // console.log(omitObj.color);
    // console.log(omitObj.id)

    On compiling, it will generate the following JavaScript code

    var omitObj = {
       size: 20
    };
    console.log(omitObj.size);
    // This will generate an error
    // console.log(omitObj.color);
    // console.log(omitObj.id)

    Output

    The above code will produce the following output

    20
    

    Readonly Type in TypeScript

    We can use the Readonly utility type to make all properties of a type read-only, i.e., immutable. So, we can't assign a new value to any read-only property after it is initialized for the first time.

    Example

    In this example, keyboard_type contains three different properties. We have used the Readonly utility type to make all properties of the keyboard object read-only. Read-only means we can access the properties to read their values, but we can't modify or reassign them.

    type keyboard_type = {
       keys: number;
       isBackLight: boolean;
       size: number;
    };
    let keyboard: Readonly<keyboard_type> = {
       keys: 70,
       isBackLight: true,
       size: 20,
    };
    console.log("Is there backlight in the keyboard? " + keyboard.isBackLight);
    console.log("Total keys in the keyboard are " + keyboard.keys);
    // keyboard.size = 30 // not allowed, as all properties of the keyboard are read-only

    On compiling, it will generate the following JavaScript code

    var keyboard = {
       keys: 70,
       isBackLight: true,
       size: 20
    };
    console.log("Is there backlight in the keyboard? " + keyboard.isBackLight);
    console.log("Total keys in the keyboard are " + keyboard.keys);
    // keyboard.size = 30 // not allowed, as all properties of the keyboard are read-only

    Output

    The above code will produce the following output

    Is there backlight in the keyboard? true
    Total keys in the keyboard are 70
    

    ReturnType Type in TypeScript

    The ReturnType utility type sets the type of a variable from a function's return type. For example, if we use a library function and don't know its return type, we can use the ReturnType utility type.

    Example

    In this example, we have created the func() function, which takes a string as a parameter and returns the same string. We have used the typeof operator to identify the function’s return type in the ReturnType utility operator.

    function func(param1: string): string {
       return param1;
    }
    // The type of the result variable is a string
    let result: ReturnType<typeof func> = func("Hello");
    console.log("The value of the result variable is "+ result);

    On compiling, it will generate the following JavaScript code

    function func(param1) {
       return param1;
    }
    // The type of the result variable is a string
    var result = func("Hello");
    console.log("The value of the result variable is " + result);

    Output

    The above code will produce the following output

    The value of the result variable is Hello

    Record Type in TypeScript

    The Record utility type creates an object type. It takes two type arguments: the keys of the object and the type of each key's value.

    Example

    In the example below, we have defined the Employee type. After that, we used the Record utility type to create the new_Employee object. Users can see that the Record utility creates Emp1 and Emp2 entries of type Employee in the new_Employee object.

    Also, users can see how we have accessed the properties of Emp1 and Emp2 objects of the new_Employee object.

    type Employee = {
       id: string;
       experience: number;
       emp_name: string;
    };
    let new_Employee: Record<"Emp1" | "Emp2", Employee> = {
       Emp1: {
          id: "123243yd",
          experience: 4,
          emp_name: "Shubham",
       },
       Emp2: {
          id: "2434ggfdg",
          experience: 2,
          emp_name: "John",
       },
    };
    console.log(new_Employee.Emp1.emp_name);
    console.log(new_Employee.Emp2.emp_name);

    On compiling, it will generate the following JavaScript code

    var new_Employee = {
       Emp1: {
          id: "123243yd",
          experience: 4,
          emp_name: "Shubham"
       },
       Emp2: {
          id: "2434ggfdg",
          experience: 2,
          emp_name: "John"
       }
    };
    console.log(new_Employee.Emp1.emp_name);
    console.log(new_Employee.Emp2.emp_name);

    Output

    The above code will produce the following output

    Shubham
    John
    
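    Record is also handy when the keys are not known in advance. In the hypothetical sketch below, Record<string, number> types a tally object whose keys are arbitrary strings and whose values are all numbers:

```typescript
// Record<string, number>: any string key, every value a number.
const wordCount: Record<string, number> = {};

for (const word of ["apple", "banana", "apple"]) {
  // ?? supplies 0 the first time a word is seen.
  wordCount[word] = (wordCount[word] ?? 0) + 1;
}

console.log(wordCount["apple"]);  // 2
console.log(wordCount["banana"]); // 1
```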

    NonNullable Type in TypeScript

    The NonNullable utility type removes null and undefined from a type. It ensures that a variable always holds a defined, non-null value.

    Example

    In this example, we have created the var_type, which can also be null or undefined. After that, we used var_type with the NonNullable utility type, and we can observe that we can't assign null or undefined values to the variable.

    type var_type = number | boolean | null | undefined;
    let variable2: NonNullable<var_type> = false;
    let variable3: NonNullable<var_type> = 30;

    console.log("The value of variable2 is " + variable2);
    console.log("The value of variable3 is " + variable3);
    // The below code will generate an error
    // let variable4: NonNullable<var_type> = null;

    On compiling, it will generate the following JavaScript code

    var variable2 = false;
    var variable3 = 30;
    console.log("The value of variable2 is " + variable2);
    console.log("The value of variable3 is " + variable3);
    // The below code will generate an error
    // let variable4: NonNullable<var_type> = null;

    Output

    The above code will produce the following output

    The value of variable2 is false
    The value of variable3 is 30
    
  • Mixins

    TypeScript is an object-oriented programming language and supports classes, which are blueprints for objects. A class can be defined in TypeScript as shown below.

    class MathOps {
       // defining a method
       add(a: number, b: number): void {
          console.log('sum is: ', a + b);
       }
    }

    Now, suppose we have multiple classes like the above which contain different operations.

    What if you want to reuse both classes and create a third class by extending them? For example, if you try to extend the 'allOps' class with the 'MathOps' and 'BitwiseOps' classes, TypeScript will give you an error, as multiple inheritance is not allowed in TypeScript.

    class allOps extends MathOps, BitwiseOps {
       // Executable code
    }

    To solve the above problem, developers can use the mixins in TypeScript.

    Introduction to Mixins

    In TypeScript, mixins are a pattern that allows us to extend a single class with the members of multiple classes. This way, we can reuse class components and combine their methods and properties in a single class.

    We can use the declaration merging technique to extend the single class via multiple classes.

    Declaration Merging

    When you have two declarations with the same name, they will be merged without throwing any error.

    For example, in the below code, we have defined the interface ‘A’ twice containing different properties. After that, we have created the ‘obj’ object of type ‘A’, which contains the properties ‘a’ and ‘b’ as both interfaces ‘A’ are merged.

    // Definition of an interface with the same name twice
    interface A {
       a: string;
    }
    interface A {
       b: string;
    }
    // Object that implements the interface
    let obj: A = {
       a: 'a',
       b: 'b'
    };
    console.log(obj.a); // a
    console.log(obj.b); // b

    On compiling, it will generate the following JavaScript code.

    // Object that implements the interface
    let obj = {
       a: 'a',
       b: 'b'
    };
    console.log(obj.a); // a
    console.log(obj.b); // b

    Output

    The output of the above example is as follows

    a
    b
    

    Now, let's understand how we can use the declaration merging technique to extend multiple classes with a single class.

    Implementing Our Mixin Helper Function

    Let's understand the below example code line-by-line.

    • We have defined the 'Swimmer' class, which contains the StartSwim() and EndSwim() methods.
    • Next, we have defined the 'Cyclist' class, which contains the StartCycle() and EndCycle() methods.
    • Next, the 'combineMixins()' function is a helper function that allows us to mix the properties and methods of two or more classes into one class. That's why it is called a mixin function.
      • The function takes the derived (target) class as the first parameter and an array of base classes as the second parameter.
      • It iterates through the array of base classes using the forEach() method.
      • In the forEach() callback, it iterates through each property of the base class's prototype and adds it to the prototype of the derived class using Object.defineProperty().
    • After that, we have defined the 'Biathlete' class.
    • The interface 'Biathlete' extends the 'Swimmer' and 'Cyclist' classes to merge all property and method declarations of both classes into the 'Biathlete' class. However, it won't combine the implementation of methods.
    • Next, we call the 'combineMixins()' function which merges the implementations of methods of the classes in other classes.
    • Next, we created the instance of the 'Biathlete' class and used it to call the methods of the 'Swimmer' and 'Cyclist' classes.
    // Swimmer class definition
    class Swimmer {
       // Methods
       StartSwim() {
          console.log('Starting the swimming session...');
       }
       EndSwim() {
          console.log('Completed the swimming session.');
       }
    }
    // Cyclist class definition
    class Cyclist {
       // Methods
       StartCycle() {
          console.log('Starting the cycling session...');
       }
       EndCycle() {
          console.log('Completed the cycling session.');
       }
    }
    // export class Biathlete extends Swimmer, Cyclist {} // Error: multiple inheritance is not allowed
    function combineMixins(derived: any, bases: any[]) {
       // Iterate over the base classes
       bases.forEach(base => {
          // Iterate over the properties of the base class
          Object.getOwnPropertyNames(base.prototype).forEach(name => {
             // Copy the properties of the base class to the derived class
             Object.defineProperty(derived.prototype, name, Object.getOwnPropertyDescriptor(base.prototype, name));
          });
       });
    }
    // Export Biathlete class
    export class Biathlete {}
    // Use interface to combine mixins
    export interface Biathlete extends Swimmer, Cyclist {}
    // Combine mixins
    combineMixins(Biathlete, [Swimmer, Cyclist]);
    // Create an instance of Biathlete class
    const athlete = new Biathlete();
    // Call the methods
    athlete.StartSwim();
    athlete.EndSwim();
    athlete.StartCycle();
    athlete.EndCycle();

    Output

    Starting the swimming session...
    Completed the swimming session.
    Starting the cycling session...
    Completed the cycling session.
    

    This way, we can merge the structure of two components using an interface. After that, we can use the mixin function to combine the implementations of the two components into a third one and reuse them.
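    Another mixin style, described in the TypeScript handbook, expresses each mixin as a function that takes a base class and returns a new class extending it; composing the functions then avoids the manual property copying. A minimal sketch (the class and function names are illustrative, not from the example above):

```typescript
// A constructor type that any class satisfies.
type Constructor = new (...args: any[]) => {};

// Each mixin takes a base class and returns an extended class.
function Swimmable<TBase extends Constructor>(Base: TBase) {
  return class extends Base {
    swim() { return "swimming"; }
  };
}

function Cyclable<TBase extends Constructor>(Base: TBase) {
  return class extends Base {
    cycle() { return "cycling"; }
  };
}

// Compose both mixins onto a plain base class.
class Athlete {}
class Biathlete extends Cyclable(Swimmable(Athlete)) {}

const athlete = new Biathlete();
console.log(athlete.swim());  // swimming
console.log(athlete.cycle()); // cycling
```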

  • Iterators and Generators

    In TypeScript, iterators and generators allow us to control iteration over iterables. Iterables are objects, like arrays and tuples, that we can iterate through. Using iterators and generators allows us to write efficient and readable code.

    Here, we will discuss how to create custom iterators and generators in TypeScript.

    Iterators

    Iterators are used to traverse iterable objects. An iterator object contains a next() method, which returns an object with the following two properties.

    • value: The value property contains the value of the next element in the sequence.
    • done: The done property contains a boolean value representing whether the iterator has reached the end of the sequence.

    Let’s look at the below examples of iterators.

    Example: Using the values() Method

    In the code below, we have defined the ‘fruits’ array containing strings. The ‘fruits.values()’ method returns an iterator object, which is stored in the ‘iterator’ variable.

    Whenever we call the next() method, it returns an object containing the ‘value’ and ‘done’ properties, and you can see all the values of the array. When we call the next() method a sixth time, it returns an object whose ‘value’ property is undefined, as the iterator has reached the end of the sequence.

    // Defining a fruits array
    const fruits = ['apple', 'banana', 'mango', 'orange', 'strawberry'];

    // Defining an iterator
    const iterator = fruits.values();

    // Getting the first element
    console.log(iterator.next().value); // apple
    // Getting the second element
    console.log(iterator.next().value); // banana
    // Getting remaining elements
    console.log(iterator.next().value); // mango
    console.log(iterator.next().value); // orange
    console.log(iterator.next().value); // strawberry
    console.log(iterator.next().value); // undefined

    On compiling, it will generate the following JavaScript code.

    // Defining a fruits array
    const fruits = ['apple', 'banana', 'mango', 'orange', 'strawberry'];
    // Defining an iterator
    const iterator = fruits.values();
    // Getting the first element
    console.log(iterator.next().value); // apple
    // Getting the second element
    console.log(iterator.next().value); // banana
    // Getting remaining elements
    console.log(iterator.next().value); // mango
    console.log(iterator.next().value); // orange
    console.log(iterator.next().value); // strawberry
    console.log(iterator.next().value); // undefined

    Output

    The output of the above example code is as follows

    apple
    banana
    mango
    orange
    strawberry
    undefined
    

    Example: Creating the Custom Iterator Function

    In the code below, the createArrayIterator() function is a custom iterator function.

    We start by defining the ‘currentIndex’ variable to keep track of the current index in the iterable.

    After that, we return an object containing the ‘next’ property from the function. The ‘next’ property holds a method that returns an object with the ‘value’ and ‘done’ properties, set according to the current element in the sequence.

    After that, we use the createArrayIterator() function to traverse an array of numbers.

    // Custom iterator function
    function createArrayIterator(array: number[]) {
        // Start at the beginning of the array
        let currentIndex = 0;
        // Return an object with a next method
        return {
            // next method returns an object with a value and done property
            next: function () {
                // Return the current element and increment the index
                return currentIndex < array.length
                    ? { value: array[currentIndex++], done: false }
                    : { value: null, done: true };
            }
        };
    }

    // Create an iterator for an array of numbers
    const numbers = [10, 20, 30];
    const iterator = createArrayIterator(numbers);

    console.log(iterator.next().value); // 10
    console.log(iterator.next().value); // 20
    console.log(iterator.next().value); // 30
    console.log(iterator.next().done); // true

    On compiling, it will generate the following JavaScript code.

    // Custom iterator function
    function createArrayIterator(array) {
        // Start at the beginning of the array
        let currentIndex = 0;
        // Return an object with a next method
        return {
            // next method returns an object with a value and done property
            next: function () {
                // Return the current element and increment the index
                return currentIndex < array.length
                    ? { value: array[currentIndex++], done: false }
                    : { value: null, done: true };
            }
        };
    }
    // Create an iterator for an array of numbers
    const numbers = [10, 20, 30];
    const iterator = createArrayIterator(numbers);
    console.log(iterator.next().value);// 10
    console.log(iterator.next().value);// 20
    console.log(iterator.next().value);// 30
    console.log(iterator.next().done);// true

    Output

    The output of the above example code is as follows

    10
    20
    30
    true
    
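    The custom iterator above has to be driven by calling next() manually. If an object also implements the well-known Symbol.iterator method, the same protocol lets a for...of loop drive the iteration for us. The sketch below illustrates this; the Countdown class name is our own example, not part of the tutorial's code.

    ```typescript
    // A custom iterable: implementing Symbol.iterator makes it work with for...of
    class Countdown implements Iterable<number> {
        constructor(private start: number) {}

        [Symbol.iterator](): Iterator<number> {
            let current = this.start;
            return {
                // Same protocol as before: next() returns { value, done }
                next: (): IteratorResult<number> =>
                    current > 0
                        ? { value: current--, done: false }
                        : { value: undefined, done: true }
            };
        }
    }

    // for...of calls Symbol.iterator, then next() repeatedly until done is true
    for (const n of new Countdown(3)) {
        console.log(n); // 3, 2, 1
    }
    ```

    This is the same next()-based protocol as the previous example; implementing it under the Symbol.iterator key is what makes the object consumable by for...of and the spread operator.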

    Generators

    Generator functions are similar to iterators: they return values one at a time rather than all at once. When you call a generator function, it returns a generator object, which can be used to get the values one by one.

    Generator functions are mainly useful when you want to produce values one at a time rather than computing all the values at once and storing them in memory.

    Syntax

    Users can follow the syntax below to create a generator function in TypeScript.

    function* func_name() {
        yield val;
    }
    const gen = func_name(); // "Generator { }"
    console.log(gen.next()); // { value: val, done: false }
    • In the above syntax, we have used the ‘function*’ to define the generator function.
    • You can use the ‘yield’ keyword to return values one by one from the generator function.
    • When you call the generator function, it returns the generator object.
    • When you call the next() method, it returns an object containing the ‘value’ and ‘done’ properties, just like an iterator.

    Example: Basic Generator Function

    In the code below, the numberGenerator() function is a generator function. We have used the ‘yield’ keyword to return the values 10, 20, and 30 one by one.

    After that, we called the numberGenerator() function, which returns a generator object. To get the values, we use the next() method of the generator object.

    // Basic generator function
    function* numberGenerator() {
        yield 10;
        yield 20;
        yield 30;
    }

    // Create a generator object
    const gen = numberGenerator();

    // Get the values one by one
    console.log(gen.next().value); // 10
    console.log(gen.next().value); // 20
    console.log(gen.next().value); // 30
    console.log(gen.next().done); // true

    On compiling, it will generate the same JavaScript code.

    Output

    The output of the above example code is as follows

    10
    20
    30
    true
    
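    Because a generator produces values lazily, it can even represent an infinite sequence without running out of memory: a value is only computed when next() is called. The following sketch illustrates this; the naturals() function name is our own example.

    ```typescript
    // An infinite generator: values are produced lazily, one per next() call
    function* naturals(): Generator<number> {
        let n = 1;
        while (true) {
            yield n++;
        }
    }

    const seq = naturals();
    console.log(seq.next().value); // 1
    console.log(seq.next().value); // 2
    console.log(seq.next().value); // 3
    // The sequence never ends, but only the requested values are ever computed
    ```

    An equivalent iterator or array version would have to either precompute the values or track the counter state by hand; here the paused generator frame keeps the state for us.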

    Example: Creating the Generator Function to Traverse a Range

    Here, we have defined the range() generator function, which takes the starting and ending points of the range as parameters. In the function, we traverse the range and return the values one by one using the ‘yield’ keyword.

    After that, we use the range() function with a ‘for...of’ loop to traverse the generator object it returns. The loop prints each value yielded by the range() function.

    // Generator function that traverses a range
    function* range(start: number, end: number) {
        // Loop through the range
        for (let i = start; i <= end; i++) {
            // Yield the current value
            yield i;
        }
    }

    // Loop through the range
    for (const num of range(1, 5)) {
        console.log(num); // 1, 2, 3, 4, 5
    }

    On compiling, it will generate the following JavaScript code.

    // Generator function that traverses a range
    function* range(start, end) {
        // Loop through the range
        for (let i = start; i <= end; i++) {
            // Yield the current value
            yield i;
        }
    }

    // Loop through the range
    for (const num of range(1, 5)) {
        console.log(num); // 1, 2, 3, 4, 5
    }

    Output

    1
    2
    3
    4
    5
    

    Difference Between Iterators and Generators

    Iterators and generators look similar, but they differ in several ways. The table below summarizes the key differences.

    Feature | Iterator | Generator
    Definition | An object that adheres to the iterator protocol, specifically implementing a next() method. | A function that can pause and resume execution, automatically managing its state internally.
    Control Mechanism | Manually controls iteration via the next() method, which returns { value, done }. | Uses yield to pause and return values, and next() to resume.
    Syntax | Typically involves creating an object with a next() method. | Defined with function* syntax and includes one or more yield statements.
    Usage Complexity | Higher, due to explicit state management and the need for a custom next() implementation. | Lower, as state management and iteration control are simplified by yield.
    Ideal Use Cases | Suitable for simple, custom iterations where explicit control is required. | Better for complex sequences, asynchronous tasks, or when leveraging lazy execution.