API reference#
This document provides the API documentation for the DeepPeak package.
- class WaveNet(sequence_length: int, num_filters: int = 64, num_dilation_layers: int = 6, kernel_size: int = 3, optimizer: str | Optimizer = 'adam', loss: str | Loss = 'binary_crossentropy', metrics: Tuple[str | Metric] = 'accuracy', seed: int | None = None)[source]#
Bases:
BaseClassifier
WaveNet-style 1D detector for per-timestep peak classification.
- Parameters:
sequence_length (int) – Length of the input sequences.
num_filters (int) – Number of filters in the convolutional layers.
num_dilation_layers (int) – Number of dilated convolutional layers.
kernel_size (int) – Size of the convolutional kernels.
optimizer (Union[str, tf.keras.optimizers.Optimizer]) – Optimizer for model compilation.
loss (Union[str, tf.keras.losses.Loss]) – Loss function for model training.
metrics (Tuple[Union[str, tf.keras.metrics.Metric]]) – Metrics for model evaluation.
seed (Optional[int]) – Random seed for reproducibility.
Notes

Architecture:
- Input projection (1x1) to num_filters channels
- Stack of dilated causal Conv1D blocks with exponentially increasing dilation (1, 2, 4, …, 2^(L-1)), residual connections, and skip connections
- Aggregated skip path -> ReLU -> 1x1 sigmoid for per-step probability
Notes
- Output shape: (batch, sequence_length, 1), sigmoid probabilities
- Loss: binary_crossentropy (per time-step)

This class encapsulates build/fit/evaluate/predict and plotting utilities.
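Because the dilation doubles at each layer, the stack's receptive field grows exponentially with depth. A minimal pure-Python sketch of that arithmetic, using the defaults above (the function name is illustrative, not part of DeepPeak; the 1x1 input/output projections are ignored since they do not widen the field):

```python
def wavenet_receptive_field(kernel_size: int = 3, num_dilation_layers: int = 6) -> int:
    """Receptive field (in time steps) of a stack of dilated causal Conv1D
    layers with dilations 1, 2, 4, ..., 2^(L-1)."""
    dilations = [2 ** i for i in range(num_dilation_layers)]
    # Each layer widens the receptive field by (kernel_size - 1) * dilation.
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Defaults: 1 + 2 * (1 + 2 + 4 + 8 + 16 + 32) = 127 time steps
print(wavenet_receptive_field())  # -> 127
```

This is a quick way to check that num_dilation_layers is large enough for the peak widths in your data before training.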
- class DenseNet(sequence_length: int, filters: Tuple[int, int, int] = (32, 64, 128), dilation_rates: Tuple[int, int, int] = (1, 2, 4), kernel_size: int = 3, optimizer: str | Optimizer = 'adam', loss: str | Loss = 'binary_crossentropy', metrics: Tuple[str | Metric] = 'accuracy', seed: int | None = None)[source]#
Bases:
BaseClassifier
Compact 1D ConvNet for per-timestep peak classification.
- Parameters:
sequence_length (int) – Length of the input sequences.
filters (Tuple[int, int, int]) – Number of filters in each convolutional layer.
dilation_rates (Tuple[int, int, int]) – Dilation rates for each convolutional layer.
kernel_size (int) – Size of the convolutional kernels.
optimizer (Union[str, tf.keras.optimizers.Optimizer]) – Optimizer for model compilation.
loss (Union[str, tf.keras.losses.Loss]) – Loss function for model training.
metrics (Tuple[Union[str, tf.keras.metrics.Metric]]) – Metrics for model evaluation.
seed (Optional[int]) – Random seed for reproducibility.
Notes

Architecture:
- Three 1D Conv layers with ReLU activations and exponentially increasing dilation (default: 1, 2, 4), padding='same'
- Final 1x1 Conv with sigmoid -> per-step probability map named 'ROI'
Output#
ROI: shape (batch, sequence_length, 1) with probabilities in [0, 1]
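The same receptive-field arithmetic applies here, just with an explicit dilation_rates tuple instead of a power-of-two schedule. A small pure-Python sketch (the function name is illustrative, not part of DeepPeak; the final 1x1 sigmoid layer is ignored since it does not widen the field):

```python
from typing import Tuple

def densenet_receptive_field(kernel_size: int = 3,
                             dilation_rates: Tuple[int, ...] = (1, 2, 4)) -> int:
    """Receptive field of stacked Conv1D layers with the given dilations."""
    # Each conv layer adds (kernel_size - 1) * dilation time steps of context.
    return 1 + sum((kernel_size - 1) * d for d in dilation_rates)

# Defaults: 1 + 2 * (1 + 2 + 4) = 15 time steps
print(densenet_receptive_field())  # -> 15
```

At 15 time steps the default configuration is far more local than the WaveNet detector, which is the trade-off for its compactness.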
- class Autoencoder(sequence_length: int, dropout_rate: float = 0.3, filters: Tuple[int, int, int] = (32, 64, 128), kernel_size: int = 3, pool_size: int = 2, upsample_size: int = 2, optimizer: str | Optimizer = 'adam', loss: str | Loss = 'binary_crossentropy', metrics: Tuple[str | Metric] = 'accuracy', seed: int | None = None)[source]#
Bases:
BaseClassifier
1D convolutional autoencoder for predicting an ROI (Region Of Interest) mask.
- Parameters:
sequence_length (int) – Length of the input sequences.
dropout_rate (float) – Dropout rate for regularization.
filters (Tuple[int, int, int]) – Number of filters in each convolutional layer.
kernel_size (int) – Size of the convolutional kernels.
pool_size (int) – Pooling size for downsampling.
upsample_size (int) – Upsampling size for the decoder.
optimizer (Union[str, tf.keras.optimizers.Optimizer]) – Optimizer for model compilation.
loss (Union[str, tf.keras.losses.Loss]) – Loss function for model training.
metrics (Tuple[Union[str, tf.keras.metrics.Metric]]) – Metrics for model evaluation.
seed (Optional[int]) – Random seed for reproducibility.
Notes
Architecture:
- Encoder:
Conv1D(f[0], K, relu, same) -> Dropout(p) -> MaxPool1D(2)
Conv1D(f[1], K, relu, same) -> Dropout(p) -> MaxPool1D(2)
- Bottleneck:
Conv1D(f[2], K, relu, same) -> Dropout(p)
- Decoder:
UpSampling1D(2) -> Conv1D(f[1], K, relu, same)
UpSampling1D(2) -> Conv1D(f[0], K, relu, same)
- Output:
Conv1D(1, 1, sigmoid, name="ROI")
Output shape#
(batch, sequence_length, 1)
Notes
- Loss: binary_crossentropy on the 'ROI' head
- Metrics: configurable (default: accuracy)
- The pooling/upsampling ladder assumes sequence_length is divisible by 4 so that the decoder reconstructs the original length exactly (with padding='same' this typically works well; validate with a quick model.summary()).
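The divisibility-by-4 requirement can be checked without building the model by tracing the time dimension through the ladder. A pure-Python sketch, assuming Keras's default MaxPool1D behaviour ('valid' padding with stride equal to pool_size, so each pooling stage floors the length); the function name is illustrative, not part of DeepPeak:

```python
def roi_output_length(sequence_length: int, pool_size: int = 2,
                      upsample_size: int = 2) -> int:
    """Trace the time dimension through two MaxPool1D stages (encoder)
    and two UpSampling1D stages (decoder)."""
    length = sequence_length
    for _ in range(2):               # encoder: each pooling stage floors L / pool_size
        length = length // pool_size
    for _ in range(2):               # decoder: each upsampling stage multiplies by upsample_size
        length = length * upsample_size
    return length

print(roi_output_length(128))  # -> 128 (divisible by 4: exact reconstruction)
print(roi_output_length(130))  # -> 128 (not divisible by 4: output is shorter than input)
```

When the returned length differs from sequence_length, the ROI head cannot match the per-timestep targets, so pad or crop your sequences to a multiple of 4 first.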