tensorflow / lattice

Lattice methods in TensorFlow


Lattice Kernel Tensors not Monotonic

PoorvaRane opened this issue · comments

Referencing the issue here - #49

What is encoded in the lattice kernel? The lattice kernels for most of my signals look monotonic, and it appears that tf.cumsum is not needed. However, for one set of signals, the lattice kernel looks like this:

tensor_name:  groupwise_dnn_v2/group_score/another_sample_layer/lattice_kernel
array([[0.00000],
       [0.00000],
       [0.00000],
       [0.63704],
       [0.00000],
       [0.42078],
       [0.99970],
       [1.00000],
       [0.45677],
       [0.59488],
       [1.00000],
       [1.00000],
       [0.66063],
       [0.99998],
       [1.00000],
       [1.00000],
       [0.00000],
       [0.00000],
       [0.00000],
       [0.63867],
       [0.00000],
       [0.42078],
       [0.99982],
       [1.00000],
       [0.45697],
       [0.61441],
       [1.00000],
       [1.00000],
       [0.67717],
       [0.99998],
       [1.00000],
       [1.00000],
       [0.00000],
       [0.00000],
       [0.00000],
       [0.63715],
       [0.00000],
       [0.42114],
       [1.00000],
       [1.00000],
       [0.45767],
       [0.59587],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000],
       [0.00000],
       [0.00000],
       [0.00000],
       [0.63878],
       [0.00000],
       [0.42114],
       [1.00000],
       [1.00000],
       [0.45767],
       [0.61565],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000]], dtype=float32)

My code for the lattice is:

tfl.layers.Lattice(
    lattice_sizes=[2] * len(sample_input),
    monotonicities=['increasing'] * len(sample_input),
    output_min=0.0,
    output_max=1.0,
    name='sample_lattice'
)(sample_calib_layer)
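For context on what the kernel parameters mean: a lattice with sizes [2]*d has 2^d vertices, and the layer interpolates multilinearly between the output values stored at those vertices. Below is a minimal NumPy-only sketch of that interpolation, assuming eager-style flat kernels; `lattice_interpolate` is an illustrative name, not part of the TF Lattice API.

```python
import itertools
import numpy as np

def lattice_interpolate(kernel, x):
    """Multilinear interpolation over a 2^d lattice.

    kernel: flat array of 2**d vertex output values.
    x: input point in [0, 1]^d.
    """
    d = len(x)
    vertices = np.reshape(kernel, [2] * d)
    out = 0.0
    # Sum over all 2^d corners, each weighted by its multilinear weight.
    for corner in itertools.product([0, 1], repeat=d):
        weight = np.prod([xi if c else 1.0 - xi for xi, c in zip(x, corner)])
        out += weight * vertices[corner]
    return out

# A 2-input lattice (sizes [2, 2]) with vertex values [0, 0.2, 0.5, 1.0]:
kernel = np.array([0.0, 0.2, 0.5, 1.0])
print(lattice_interpolate(kernel, [0.0, 0.0]))  # corner value 0.0
print(lattice_interpolate(kernel, [1.0, 1.0]))  # corner value 1.0
print(lattice_interpolate(kernel, [0.5, 0.5]))  # 0.425, the mean of the corners
```

With `lattice_sizes=[2]*6` as in the snippet above, this gives 2^6 = 64 kernel entries, matching the 64 values in the dump.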

The lattice kernel is the flattened set of output values at each lattice vertex (unlike the PWL calibration layer, whose kernel encodes deltas). The kernel you have here is in fact monotonic; e.g., the deltas across the 3rd dimension are non-negative:

# Reshape the flat 64-entry kernel back to the 6-D lattice and verify the
# deltas along the 3rd dimension are non-negative.
kernel = tf.reshape(layer.kernel, [2] * 6)
assert tf.reduce_min(kernel[:, :, 1:, ...] - kernel[:, :, :-1, ...]) >= 0
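The same check can be extended to every lattice dimension at once. This is a plain-NumPy sketch (the helper name `is_monotonic_increasing` is illustrative); in practice you would pass the dumped kernel values and `[2]*6` for the lattice above.

```python
import numpy as np

def is_monotonic_increasing(flat_kernel, lattice_sizes):
    """Check that vertex values are non-decreasing along every lattice axis."""
    k = np.reshape(flat_kernel, lattice_sizes)
    for axis in range(k.ndim):
        # np.diff gives consecutive deltas along the given axis.
        if np.min(np.diff(k, axis=axis)) < 0:
            return False
    return True

# Toy example: a [2, 2] kernel that is increasing in both inputs.
print(is_monotonic_increasing([0.0, 0.2, 0.5, 1.0], [2, 2]))  # True
# Swapping two vertices breaks monotonicity along the second axis.
print(is_monotonic_increasing([0.2, 0.0, 0.5, 1.0], [2, 2]))  # False
```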

The reasoning behind this design choice is mentioned briefly in the RFC; the TL;DR is that it speeds up training and evaluation.