Authoring
Papers
Parametric Spectral Filters for Fast Converging, Scalable Convolutional Neural Networks
Luke Wood, Eric C. Larson
Primary author of the 2021 ICASSP publication, Parametric Spectral Filters for Fast Converging, Scalable Convolutional Neural Networks. Please see the GitHub repo for full details.
Learn more →

Scaling Continuous Kernels with Sparse Fourier Domain Learning
Clay Harper, Luke Wood, Eric C. Larson, Peter Gerstoft
We address three key challenges in learning continuous kernel representations: computational efficiency, parameter efficiency, and spectral bias. Continuous kernels have shown significant potential, but their practical adoption is often limited by high computational and memory demands. Additionally, these methods are prone to spectral bias, which impedes their ability to capture high-frequency details. To overcome these limitations, we propose a novel approach that leverages sparse learning in the Fourier domain. Our method enables the efficient scaling of continuous kernels, drastically reduces computational and memory requirements, and mitigates spectral bias by exploiting the Gibbs phenomenon.
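As a purely illustrative sketch of the underlying idea (not the paper's implementation, and with names of my own invention): a kernel can be parameterized by a small set of learnable low-frequency Fourier coefficients and then materialized at any resolution with an inverse FFT, which is what lets the same parameters scale to arbitrary kernel sizes.

```python
import tensorflow as tf


def make_kernel(coefficients, kernel_size):
    # Embed the few learned low-frequency coefficients in an otherwise
    # zero spectrum, then inverse-FFT back to a dense spatial kernel.
    pad = tf.zeros([kernel_size - tf.shape(coefficients)[0]], dtype=coefficients.dtype)
    spectrum = tf.concat([coefficients, pad], axis=0)
    return tf.math.real(tf.signal.ifft(spectrum))


# A handful of learnable coefficients stand in for a full-length kernel.
real = tf.Variable(tf.random.normal([8]))
imag = tf.Variable(tf.random.normal([8]))
coefficients = tf.complex(real, imag)

# The same parameters can be sampled at any kernel size.
small_kernel = make_kernel(coefficients, kernel_size=16)
large_kernel = make_kernel(coefficients, kernel_size=64)
```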
Learn more →

Efficient Graph-Friendly COCO Metric Computation for Train-Time Model Evaluation
Pre-print written alongside Francois Chollet on a novel algorithm that closely approximates Mean Average Precision within the constraints of the TensorFlow graph. The algorithm described in the publication underpins the KerasCV COCO metric implementation and can be used to perform train-time evaluation with any KerasCV object detection model.
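The mAP approximation itself lives in KerasCV, but the general pattern it relies on is a metric whose state is updated entirely with in-graph TensorFlow ops, so it can be attached via model.compile() and evaluated during fit(). A minimal sketch of that pattern, using a toy statistic rather than the paper's algorithm:

```python
import tensorflow as tf


class GraphFriendlyMetric(tf.keras.metrics.Metric):
    """Toy streaming metric: every update uses only TensorFlow ops, so it
    can run inside the graph at train time. The actual COCO mAP
    approximation is implemented in KerasCV."""

    def __init__(self, name="mean_confidence", **kwargs):
        super().__init__(name=name, **kwargs)
        self.total = self.add_weight(name="total", initializer="zeros")
        self.count = self.add_weight(name="count", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Accumulate state with graph ops only (no numpy, no Python-side state).
        self.total.assign_add(tf.reduce_sum(y_pred))
        self.count.assign_add(tf.cast(tf.size(y_pred), tf.float32))

    def result(self):
        return self.total / self.count
```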
Learn more →

Deep Learning Object Detection Approaches to Signal Identification
Pre-print written for ICASSP 2022. We decided not to continue this line of work, but our results and our spectrogram object detection dataset are open source and available on GitHub. If you'd like to finish this work and attempt to get it published, feel free to reach out.
Learn more →

Keras.io Tutorials
The Definitive Guide to Object Detection
Luke Wood
My "definitive guide" to object detection is live on keras.io! This tutorial is a bit more like a textbook chapter than a typical keras.io tutorial, but by the end of it you will have an extremely strong sense of how to tackle object detection problems with deep learning.
Learn more →

The Definitive Guide to Image Classification
Luke Wood
My "definitive guide" to image classification is live on keras.io! This tutorial is a bit more like a textbook chapter than a typical keras.io tutorial, but by the end of it you will have an extremely strong sense of how to tackle classification problems with deep learning.
Learn more →

Teach StableDiffusion new concepts via Textual Inversion
Learning new visual concepts with KerasCV's StableDiffusion implementation.
Learn more →

High-performance image generation using Stable Diffusion in KerasCV
Francois Chollet, Luke Wood, Divam Gupta
Generate new images using KerasCV's StableDiffusion model.
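If I recall the API from the tutorial correctly, generation takes only a few lines (argument names may differ slightly between KerasCV versions):

```python
import keras_cv

# Instantiate KerasCV's Stable Diffusion model at 512x512 resolution.
model = keras_cv.models.StableDiffusion(img_width=512, img_height=512)

# Generate a small batch of images from a text prompt.
images = model.text_to_image(
    "photograph of an astronaut riding a horse",
    batch_size=3,
)
```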
Learn more →

A walk through latent space with Stable Diffusion
Ian Stenbit, Francois Chollet, Luke Wood
Explore the latent manifold of Stable Diffusion.
Learn more →

Custom Image Augmentations with BaseImageAugmentationLayer
Luke Wood
Use BaseImageAugmentationLayer to implement custom data augmentations.
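As a rough sketch of the shape of such a layer (method names as I remember the BaseImageAugmentationLayer contract; the tutorial has the exact signatures), here is a hypothetical augmentation that adds a random per-image brightness offset:

```python
import tensorflow as tf
import keras_cv


class RandomBrightnessShift(keras_cv.layers.BaseImageAugmentationLayer):
    """Hypothetical custom augmentation: adds a random offset to each image."""

    def __init__(self, max_shift=0.1, **kwargs):
        super().__init__(**kwargs)
        self.max_shift = max_shift

    def get_random_transformation(self, **kwargs):
        # Sampled once per image and shared with any paired targets.
        return tf.random.uniform((), -self.max_shift, self.max_shift)

    def augment_image(self, image, transformation, **kwargs):
        return image + transformation
```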
Learn more →

CutMix, MixUp, and RandAugment image augmentation with KerasCV
Luke Wood
Use KerasCV to augment images with CutMix, MixUp, RandAugment, and more.
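A rough sketch of chaining these layers over a tf.data pipeline (as I recall, CutMix and MixUp expect batches shaped as a dict of images and one-hot labels):

```python
import keras_cv

# These layers consume a batch dict of {"images", "labels"};
# CutMix and MixUp additionally require one-hot encoded labels.
augmenters = [
    keras_cv.layers.RandAugment(value_range=(0, 255)),
    keras_cv.layers.CutMix(),
    keras_cv.layers.MixUp(),
]


def augment(inputs):
    for layer in augmenters:
        inputs = layer(inputs)
    return inputs


# e.g. train_ds = train_ds.map(augment), where each element is
# {"images": ..., "labels": ...}.
```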
Learn more →

Evaluating and exporting scikit-learn metrics in a Keras callback
Luke Wood
This example shows how to use Keras callbacks to evaluate and export non-TensorFlow based metrics.
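The core of the pattern is a callback that runs predictions and logs the scikit-learn score at the end of each epoch; a minimal sketch, with the validation arrays as placeholders:

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score


class SklearnF1Callback(tf.keras.callbacks.Callback):
    """Evaluates a scikit-learn metric on held-out data after each epoch
    and stores the result in the training logs."""

    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        probabilities = self.model.predict(self.x_val, verbose=0)
        predictions = np.argmax(probabilities, axis=-1)
        logs["val_f1"] = f1_score(self.y_val, predictions, average="macro")
```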
Learn more →