Category: Examples


  • Transfer Learning:

    • Fine-tuning a Pretrained Model: Use models like VGG16 or ResNet50 pre-trained on ImageNet and fine-tune for a specific task, such as custom image classification.
```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load VGG16 without the top (classification) layers
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))

# Freeze the base model so its ImageNet weights are not updated
base_model.trainable = False

# Add custom classification layers on top
model = models.Sequential([
    base_model,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

# Compile the model (then train with model.fit on your own dataset)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
  • Time Series Forecasting:

    • Temperature Forecasting: Predicting future temperatures based on past values using LSTMs or GRUs.
    • Stock Price Prediction: Using RNNs or LSTMs to predict stock prices based on historical data.
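Both forecasting bullets rely on the same sliding-window idea: turn a single series into (past window → next value) training pairs and fit a recurrent model. A minimal sketch, using a synthetic sine-wave "temperature" series (the data, window size, and layer sizes here are illustrative assumptions, not values from any particular tutorial):

```python
import numpy as np
from tensorflow.keras import layers, models

# Synthetic temperature-like series: a noisy sine wave
series = np.sin(np.linspace(0, 100, 2000)) + 0.1 * np.random.randn(2000)

# Build (window -> next value) supervised pairs
window = 50
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # LSTM expects (samples, timesteps, features)

# Small LSTM regressor predicting the next value in the series
model = models.Sequential([
    layers.LSTM(32, input_shape=(window, 1)),
    layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```

Swapping the synthetic series for real temperature readings (or stock prices, with appropriate scaling) leaves the windowing and model code unchanged.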
  • Generative Models:

    • Variational Autoencoders (VAEs): Keras has examples for creating a variational autoencoder that generates new images by learning the latent space of the input data.
    • Generative Adversarial Networks (GANs): A more advanced model that pits two networks against each other: a generator and a discriminator.
```python
from tensorflow.keras import layers, models, backend as K

# Reparameterization trick: sample z from N(z_mean, exp(z_log_var))
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], K.int_shape(z_mean)[1]))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon

# Encoder: 28x28 image -> 2-D latent distribution
inputs = layers.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.Flatten()(x)
z_mean = layers.Dense(2, name="z_mean")(x)
z_log_var = layers.Dense(2, name="z_log_var")(x)
z = layers.Lambda(sampling, output_shape=(2,), name="z")([z_mean, z_log_var])

# Decoder: map the 2-D latent vector back to a 28x28 image
decoder = models.Sequential([
    layers.InputLayer(input_shape=(2,)),
    layers.Dense(7 * 7 * 32, activation='relu'),
    layers.Reshape((7, 7, 32)),
    layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu'),
    layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu'),
    layers.Conv2D(1, 3, padding='same', activation='sigmoid')
])

# Define the VAE (reconstruction loss only; a full VAE also adds
# the KL-divergence term, e.g. via model.add_loss)
vae = models.Model(inputs, decoder(z))
vae.compile(optimizer='adam', loss='mse')
```
  • Text Classification:

    • IMDB Sentiment Analysis: Using Keras for binary sentiment classification (positive/negative) on the IMDB movie review dataset with recurrent neural networks (RNNs) or LSTMs.
```python
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras import layers, models

# Load IMDB dataset, keeping the 10,000 most frequent words
max_features = 10000
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)

# Pad sequences to ensure uniform input size
maxlen = 500
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)

# Build LSTM model
model = models.Sequential([
    layers.Embedding(max_features, 128, input_length=maxlen),
    layers.LSTM(128, dropout=0.2, recurrent_dropout=0.2),
    layers.Dense(1, activation='sigmoid')
])

# Compile and train the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3, batch_size=64, validation_data=(x_test, y_test))
```
  • Image Classification:

    • MNIST Handwritten Digits: Classifying the famous MNIST dataset, which contains images of handwritten digits (0–9). This is a beginner-friendly example that introduces how to use convolutional neural networks (CNNs) for image classification.
    • CIFAR-10 Image Classification: Building a CNN to classify the CIFAR-10 dataset, a set of 60,000 32×32 color images in 10 classes.
```python
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist

# Load MNIST dataset
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize pixel values and add the channel dimension Conv2D expects
train_images = (train_images / 255.0).reshape((-1, 28, 28, 1))
test_images = (test_images / 255.0).reshape((-1, 28, 28, 1))

# Build CNN model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile and train the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5, validation_data=(test_images, test_labels))
```
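For the CIFAR-10 bullet, the same CNN pattern carries over with a 32×32×3 input and a sparse-categorical loss. The sketch below trains on a random placeholder batch so it runs standalone; in practice you would load the real data with `tensorflow.keras.datasets.cifar10.load_data()`:

```python
import numpy as np
from tensorflow.keras import layers, models

# CNN for 32x32 RGB images in 10 classes (CIFAR-10-shaped)
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Random placeholder batch, just to demonstrate the expected shapes
x = np.random.rand(8, 32, 32, 3).astype('float32')
y = np.random.randint(0, 10, size=(8, 1))
model.fit(x, y, epochs=1, verbose=0)
```

The only changes from the MNIST model are the input shape (three color channels instead of one) and the data pipeline; the layer stack itself is identical.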