Autoencoder Classifier: Detecting Regions of Interest in Synthetic Signals#

This example demonstrates how to use DeepPeak's autoencoder classifier to identify regions of interest (ROIs) in synthetic 1D signals containing Gaussian peaks.

We will:

- Generate a dataset of noisy signals with random Gaussian peaks
- Build and train an autoencoder classifier to detect ROIs
- Visualize the training process and model predictions

Note

This example is fully reproducible and suitable for Sphinx-Gallery documentation.

Imports and reproducibility#

import numpy as np

from DeepPeak.machine_learning.classifier import Autoencoder, BinaryIoU
from DeepPeak.signals import Kernel, SignalDatasetGenerator

np.random.seed(42)

Generate synthetic dataset#

NUM_PEAKS = 3
SEQUENCE_LENGTH = 200

generator = SignalDatasetGenerator(n_samples=100, sequence_length=SEQUENCE_LENGTH)

dataset = generator.generate(
    signal_type=Kernel.GAUSSIAN,
    n_peaks=(1, NUM_PEAKS),
    amplitude=(1, 20),
    position=(0.1, 0.9),
    width=(0.03, 0.05),
    noise_std=0.1,
    categorical_peak_count=False,
    compute_region_of_interest=True,
)
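
The returned dataset bundles the noisy signals with a per-sample binary mask marking the regions of interest. As a quick sanity check you can inspect the array shapes before training; this is a minimal sketch (the exact shapes are an assumption, but the `signals` and `region_of_interest` attributes are the ones used for training below):

# Shapes are assumptions: one row per sample, one column per time step.
print(dataset.signals.shape)                # e.g. (100, 200) or (100, 200, 1)
print(dataset.region_of_interest.shape)     # binary ROI mask aligned with the signal
print(int(dataset.region_of_interest[0].sum()))  # time steps flagged as ROI in sample 0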

Visualize a few example signals and their regions of interest#

dataset.plot(number_of_samples=3)
[Figure: example signals with their regions of interest]

Build and summarize the autoencoder classifier#

dense_net = Autoencoder(
    sequence_length=SEQUENCE_LENGTH,
    dropout_rate=0.30,
    filters=(32, 64, 128),
    kernel_size=3,
    pool_size=2,
    upsample_size=2,
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[BinaryIoU(threshold=0.5)],
)
dense_net.build()
dense_net.summary()
Model: "AutoencoderROILocator"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ input (InputLayer)              │ (None, 200, 1)         │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ enc_conv0 (Conv1D)              │ (None, 200, 32)        │           128 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ enc_drop0 (Dropout)             │ (None, 200, 32)        │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ enc_pool0 (MaxPooling1D)        │ (None, 100, 32)        │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ enc_conv1 (Conv1D)              │ (None, 100, 64)        │         6,208 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ enc_drop1 (Dropout)             │ (None, 100, 64)        │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ enc_pool1 (MaxPooling1D)        │ (None, 50, 64)         │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ bottleneck_conv (Conv1D)        │ (None, 50, 128)        │        24,704 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ bottleneck_drop (Dropout)       │ (None, 50, 128)        │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dec_up0 (UpSampling1D)          │ (None, 100, 128)       │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dec_conv0 (Conv1D)              │ (None, 100, 64)        │        24,640 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dec_up1 (UpSampling1D)          │ (None, 200, 64)        │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dec_conv1 (Conv1D)              │ (None, 200, 32)        │         6,176 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ ROI (Conv1D)                    │ (None, 200, 1)         │            33 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 61,889 (241.75 KB)
 Trainable params: 61,889 (241.75 KB)
 Non-trainable params: 0 (0.00 B)
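
The BinaryIoU metric scores the overlap between the thresholded predicted mask and the ground-truth mask: the number of time steps flagged by both, divided by the number flagged by either. A minimal NumPy sketch of that computation (illustrative only, not DeepPeak's implementation) looks like this:

def binary_iou(y_true, y_pred, threshold=0.5):
    """Intersection over union of two binary ROI masks (illustrative sketch)."""
    true = np.asarray(y_true) >= 0.5
    pred = np.asarray(y_pred) >= threshold
    union = np.logical_or(true, pred).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(true, pred).sum() / union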

Train the classifier#

history = dense_net.fit(
    dataset.signals,
    dataset.region_of_interest,
    validation_split=0.2,
    epochs=4,
    batch_size=64,
)
Epoch 1/4

1/2 ━━━━━━━━━━━━━━━━━━━━ 1s 2s/step - BinaryIoU: 0.0512 - loss: 0.7303
2/2 ━━━━━━━━━━━━━━━━━━━━ 2s 323ms/step - BinaryIoU: 0.0501 - loss: 0.7184 - val_BinaryIoU: 0.0000e+00 - val_loss: 0.6397
Epoch 2/4

1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 41ms/step - BinaryIoU: 0.0000e+00 - loss: 0.6532
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 61ms/step - BinaryIoU: 0.0000e+00 - loss: 0.6522 - val_BinaryIoU: 0.0000e+00 - val_loss: 0.6184
Epoch 3/4

1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 40ms/step - BinaryIoU: 0.0000e+00 - loss: 0.6104
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 61ms/step - BinaryIoU: 0.0000e+00 - loss: 0.6083 - val_BinaryIoU: 0.0000e+00 - val_loss: 0.6032
Epoch 4/4

1/2 ━━━━━━━━━━━━━━━━━━━━ 0s 39ms/step - BinaryIoU: 0.0107 - loss: 0.5704
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 60ms/step - BinaryIoU: 0.0111 - loss: 0.5631 - val_BinaryIoU: 0.3814 - val_loss: 0.5704

Plot training history#

dense_net.plot_model_history(filter_pattern="BinaryIoU")
[Figure: training history of the BinaryIoU metric]
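
The same curves can be drawn by hand from the returned history object. Assuming `fit` returns a Keras-style History (as the epoch logs above suggest), a minimal matplotlib sketch is:

import matplotlib.pyplot as plt

# Keys follow the metric names shown in the training log above.
plt.plot(history.history["BinaryIoU"], label="train")
plt.plot(history.history["val_BinaryIoU"], label="validation")
plt.xlabel("epoch")
plt.ylabel("BinaryIoU")
plt.legend()
plt.show()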

Predict and visualize on a test signal#

dense_net.plot_prediction(signal=dataset.signals[0:1, :], threshold=0.4)
[Figure: predicted region of interest for the test signal]
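
The thresholded mask can also be converted into explicit start/stop indices for each detected region. The helper below is a plain NumPy sketch (the name `mask_to_segments` is hypothetical); it works on any 1D array of probabilities, or on a binary mask such as `dataset.region_of_interest[0]`:

def mask_to_segments(values, threshold=0.4):
    """Return (start, stop) index pairs of contiguous runs above threshold."""
    mask = np.asarray(values).ravel() >= threshold
    # Pad with False so every run has both a rising and a falling edge.
    padded = np.concatenate(([False], mask, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    return list(zip(edges[::2], edges[1::2]))  # stop index is exclusive

segments = mask_to_segments(dataset.region_of_interest[0], threshold=0.5)
print(segments)  # one (start, stop) pair per contiguous region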

Total running time of the script: (0 minutes 6.184 seconds)
