DenseNet Classifier: Detecting Regions of Interest in Synthetic Signals#

This example demonstrates how to use DeepPeak's DenseNet classifier to identify regions of interest (ROIs) in synthetic 1D signals containing Lorentzian peaks.

We will:

- Generate a dataset of noisy signals with random Lorentzian peaks
- Build and train a DenseNet classifier to detect ROIs
- Visualize the training process and model predictions

Note

This example is fully reproducible and suitable for Sphinx-Gallery documentation.

Imports and reproducibility#

import numpy as np

from DeepPeak.machine_learning.classifier import DenseNet
from DeepPeak.signals import SignalDatasetGenerator
from DeepPeak import kernel

np.random.seed(42)

Generate synthetic dataset#

NUM_PEAKS = 3
SEQUENCE_LENGTH = 200

# Use a distinct name so the imported `kernel` module is not shadowed
peak_kernel = kernel.Lorentzian(
    amplitude=(1, 20),
    position=(0.1, 0.9),
    width=(0.03, 0.05),
)

generator = SignalDatasetGenerator(n_samples=300, sequence_length=SEQUENCE_LENGTH)

dataset = generator.generate(
    kernel=peak_kernel,
    n_peaks=(1, NUM_PEAKS),
    noise_std=0.1,
    categorical_peak_count=False,
    compute_region_of_interest=True,
)
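The generator's exact sampling and ROI rules are internal to DeepPeak; as a rough illustration of what one sample looks like, here is a minimal NumPy sketch. The `make_sample` helper, its Lorentzian formula, and its "within one half-width of the peak" ROI rule are illustrative assumptions, not DeepPeak's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_sample(sequence_length=200, noise_std=0.1):
    """Sketch of one sample: a single Lorentzian peak plus Gaussian
    noise, and a binary ROI mask marking points near the peak center."""
    x = np.linspace(0.0, 1.0, sequence_length)
    amplitude = rng.uniform(1, 20)
    position = rng.uniform(0.1, 0.9)
    width = rng.uniform(0.03, 0.05)

    # Lorentzian profile: A * w^2 / ((x - x0)^2 + w^2)
    signal = amplitude * width**2 / ((x - position) ** 2 + width**2)
    signal += rng.normal(0.0, noise_std, size=sequence_length)

    # Illustrative ROI rule: positions within one width of the center.
    roi = (np.abs(x - position) <= width).astype(np.float32)
    return signal, roi

signal, roi = make_sample()
```

The mask has the same length as the signal, which is what lets a per-timestep classifier predict it directly.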

Visualize a few example signals and their regions of interest#

dataset.plot(number_of_samples=3)
(Figure: three example signals with their ROI overlays, Samples 0, 1, and 2.)

Build and summarize the DenseNet classifier#

dense_net = DenseNet(
    sequence_length=SEQUENCE_LENGTH,
    filters=(32, 64, 128),
    dilation_rates=(1, 2, 4),
    kernel_size=3,
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
dense_net.build()
dense_net.summary()
Model: "DenseNetDetector"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ input (InputLayer)              │ (None, 200, 1)         │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv_0 (Conv1D)                 │ (None, 200, 32)        │           128 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv_1 (Conv1D)                 │ (None, 200, 64)        │         6,208 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv_2 (Conv1D)                 │ (None, 200, 128)       │        24,704 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ ROI (Conv1D)                    │ (None, 200, 1)         │           129 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 31,169 (121.75 KB)
 Trainable params: 31,169 (121.75 KB)
 Non-trainable params: 0 (0.00 B)
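The parameter counts in the summary follow the standard Conv1D formula, params = kernel_size × in_channels × filters + filters (the bias term); the ROI head is a 1×1 convolution. A quick check reproduces the table:

```python
def conv1d_params(kernel_size, in_channels, filters):
    # Weights: kernel_size * in_channels * filters; biases: filters.
    return kernel_size * in_channels * filters + filters

layers = [
    ("conv_0", conv1d_params(3, 1, 32)),    # 128
    ("conv_1", conv1d_params(3, 32, 64)),   # 6,208
    ("conv_2", conv1d_params(3, 64, 128)),  # 24,704
    ("ROI", conv1d_params(1, 128, 1)),      # 129 (1x1 output head)
]
total = sum(p for _, p in layers)
print(f"{total:,}")  # → 31,169, matching the summary
```

Note that dilation changes the receptive field but not the parameter count, so the `dilation_rates=(1, 2, 4)` setting does not appear in this arithmetic.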

Train the classifier#

history = dense_net.fit(
    dataset.signals,
    dataset.region_of_interest,
    validation_split=0.2,
    epochs=20,
    batch_size=64,
)
Epoch 1/20
Epoch 1: val_loss improved from None to 0.62679, saving model to /tmp/wavenet_ckpt_ryynbcb7/best.weights.h5
4/4 ━━━━━━━━━━━━━━━━━━━━ 1s 87ms/step - accuracy: 0.7956 - loss: 0.6597 - val_accuracy: 0.9503 - val_loss: 0.6268

(epochs 2–19 omitted; val_loss improves steadily from 0.6268 to 0.0473)

Epoch 20/20
Epoch 20: val_loss improved from 0.04735 to 0.04655, saving model to /tmp/wavenet_ckpt_ryynbcb7/best.weights.h5
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 40ms/step - accuracy: 0.9803 - loss: 0.0528 - val_accuracy: 0.9833 - val_loss: 0.0465
Restored best model weights from /tmp/wavenet_ckpt_ryynbcb7/best.weights.h5
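The `binary_crossentropy` loss used above is applied independently at each of the 200 time steps, comparing the sigmoid output against the binary ROI mask. A minimal NumPy version of the averaged loss (a sketch, not the Keras implementation):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy over all time steps."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred)
                    + (1.0 - y_true) * np.log(1.0 - y_pred))

y_true = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
confident = binary_crossentropy(y_true, np.array([0.1, 0.1, 0.9, 0.9, 0.1]))
uncertain = binary_crossentropy(y_true, np.array([0.5, 0.5, 0.5, 0.5, 0.5]))
# Confident, correct predictions score a much lower loss than a
# uniform 0.5 output, which is what drives the val_loss decrease above.
```

The `accuracy` metric reported during training is likewise per-timestep, which is why it starts high: most of each signal lies outside any ROI.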

Plot training history#

dense_net.plot_model_history()
(Figure: training curves for accuracy, loss, val_accuracy, and val_loss.)
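`plot_model_history` draws the curves stored by Keras in `history.history`, a plain dict of per-epoch lists. The same dict can be queried directly, e.g. to find the best epoch; the values below are made up for illustration, shaped like a real `history.history`:

```python
# Hypothetical per-epoch values shaped like `history.history`.
history_dict = {
    "loss": [0.66, 0.61, 0.57],
    "val_loss": [0.63, 0.58, 0.53],
    "val_accuracy": [0.950, 0.950, 0.951],
}

# Lists are 0-indexed per epoch; report 1-indexed as Keras logs do.
best_epoch = min(range(len(history_dict["val_loss"])),
                 key=history_dict["val_loss"].__getitem__) + 1
print(f"Best epoch by val_loss: {best_epoch}")  # → Best epoch by val_loss: 3
```

This mirrors what the checkpoint callback does during training: it tracks the minimum `val_loss` and restores the corresponding weights afterwards.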

Predict and visualize on test signals#

_ = dense_net.plot_prediction(
    dataset=dataset,
    number_of_samples=12,
    number_of_columns=3,
    threshold=0.1,
    randomize_signal=True,
)
(Figure: predicted ROIs for 12 randomly selected samples, arranged in a 3-column grid.)
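`plot_prediction` binarizes the sigmoid outputs with the given `threshold` (0.1 here). Doing the same by hand and extracting contiguous ROI intervals might look like the sketch below; the probability values are made up for illustration.

```python
import numpy as np

# Hypothetical per-timestep sigmoid outputs for one short signal.
probs = np.array([0.02, 0.05, 0.3, 0.8, 0.9, 0.4,
                  0.06, 0.03, 0.2, 0.7, 0.1, 0.04])
mask = probs > 0.1  # same rule as plot_prediction's `threshold`

# Pad with zeros so every True run has a rising and falling edge,
# then read off [start, stop) index pairs of contiguous runs.
edges = np.flatnonzero(np.diff(np.concatenate(([0], mask.astype(int), [0]))))
intervals = list(zip(edges[::2], edges[1::2]))
# → [(2, 6), (8, 10)]: two detected regions of interest
```

Lowering the threshold widens the detected regions and merges nearby ones; raising it does the opposite, so 0.1 here trades a few false positives for full peak coverage.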

Total running time of the script: (0 minutes 8.108 seconds)
