All convolutions in a dense block are ReLU-activated and use batch normalization. Channel-wise concatenation is only possible if the height and width dimensions of the data remain unchanged, so all convolutions in a dense block have stride one. Pooling layers are inserted between dense blocks to reduce the spatial resolution.
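To make this concrete, here is a minimal sketch of a dense block, assuming PyTorch; the layer counts, channel sizes, and growth rate are illustrative choices, not values taken from the text. Each convolution preserves height and width (stride one, padding one), so its output can be concatenated with its input along the channel dimension, and a pooling layer between blocks halves the spatial resolution.

```python
import torch
from torch import nn

def conv_block(in_channels, growth_rate):
    # Batch norm and ReLU paired with a 3x3 convolution; stride 1 and
    # padding 1 keep height and width unchanged.
    return nn.Sequential(
        nn.BatchNorm2d(in_channels),
        nn.ReLU(),
        nn.Conv2d(in_channels, growth_rate, kernel_size=3, stride=1, padding=1),
    )

class DenseBlock(nn.Module):
    def __init__(self, num_convs, in_channels, growth_rate):
        super().__init__()
        self.layers = nn.ModuleList(
            conv_block(in_channels + i * growth_rate, growth_rate)
            for i in range(num_convs)
        )

    def forward(self, x):
        for layer in self.layers:
            y = layer(x)
            # Concatenate along the channel dimension; spatial dimensions
            # match because every convolution uses stride one.
            x = torch.cat((x, y), dim=1)
        return x

# A pooling layer between dense blocks reduces spatial resolution,
# here by a factor of two.
transition_pool = nn.AvgPool2d(kernel_size=2, stride=2)

x = torch.randn(4, 3, 32, 32)
out = DenseBlock(num_convs=2, in_channels=3, growth_rate=10)(x)
print(out.shape)                   # torch.Size([4, 23, 32, 32])
print(transition_pool(out).shape)  # torch.Size([4, 23, 16, 16])
```

Note how the channel count grows with each convolution (3 + 2 x 10 = 23 in this sketch) while height and width stay fixed inside the block; only the pooling layer between blocks changes the spatial size.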