tensorflow / lattice

Lattice methods in TensorFlow

calibrated dnn model with monotonicity

aanilpala opened this issue

Unfortunately, the only example code that uses a calibrated DNN model doesn't define a monotonicity constraint for any of the features. As a result, the code ignores the projection ops returned by input_calibration_layer_from_hparams here. I'm struggling to figure out how to use the projection ops in a scenario where the modeled function is supposed to be partially monotonic, i.e. monotonic with respect to at least one feature. The only relevant documentation I could find was the comment on the projection ops here. However, I am quite puzzled by what is meant by

that must be applied at each step (or every so many steps)

I would be glad if someone could clarify how the projection ops should be applied within the network to ensure partial monotonicity.

Thanks in advance.

Note that default TF DNNs cannot support monotonicity, so even if you impose monotonicity in the calibrators, that does not guarantee monotonicity of the final function composed of the calibration and the DNN: for example, a dense layer with a negative weight reverses the direction of a monotonically increasing calibrated input. If you still want to impose monotonicity only for the calibrators, you can do one of the following:

To apply the projection:

(A) If you are not using the tf.learn/Estimator API, you can do something like the following. This is projected gradient descent: after each gradient update (or every so many updates), the projection op maps the calibrator parameters back onto the monotone set:

import tensorflow as tf
import tensorflow_lattice as tfl

x = tf.placeholder(tf.float32, ...)       # feature placeholder
labels = tf.placeholder(tf.float32, ...)  # label placeholder
# Note: the calibrated output and the label placeholder need distinct
# names, otherwise the feed_dict below would feed labels into the
# calibration output.
(calibrated, projection_op, _) = tfl.calibration_layer(x, ...)
loss = ...
train_op = ...
for _ in range(1000):
    batch_xs, batch_ys = ...
    # Apply gradient update.
    sess.run(train_op, feed_dict={x: batch_xs, labels: batch_ys})
    # Apply projection op (can be done once per batch or less often).
    sess.run(projection_op)
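
For reference, here is a fuller, self-contained toy version of that loop. The feature range, number of keypoints, optimizer, and learning rate are illustrative choices, and it assumes the 0.9.x signatures of tfl.calibration_layer and tfl.uniform_keypoints_for_signal (monotonic=+1 requesting a non-decreasing calibrator):

import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

num_keypoints = 10
x = tf.placeholder(tf.float32, shape=(None, 1))
labels = tf.placeholder(tf.float32, shape=(None, 1))

# Uniformly spaced initial keypoints on the (assumed) [0, 1] feature range.
keypoints_init = tfl.uniform_keypoints_for_signal(
    num_keypoints, input_min=0.0, input_max=1.0,
    output_min=0.0, output_max=1.0)
(calibrated, projection_op, _) = tfl.calibration_layer(
    x, num_keypoints=num_keypoints,
    keypoints_initializers=keypoints_init,
    monotonic=+1)

loss = tf.losses.mean_squared_error(labels, calibrated)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        batch_xs = np.random.uniform(size=(32, 1)).astype(np.float32)
        batch_ys = np.sqrt(batch_xs)  # toy monotonic target on [0, 1]
        sess.run(train_op, feed_dict={x: batch_xs, labels: batch_ys})
        # Project the calibrator keypoints back onto the monotone set.
        sess.run(projection_op)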

(B) If you are using the estimator API, you can create an estimator by inheriting from tfl.Calibrated. For example, take tensorflow_lattice/python/estimators/calibrated_linear.py and replace prediction_builder_from_calibrated with a DNN instead of the linear combination, as sketched below.
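
A very rough sketch of what that subclass could look like. Treat the hook name, its arguments, and the return convention here as assumptions modeled on calibrated_linear.py; mirror whatever your version of that file actually does:

import tensorflow as tf
from tensorflow_lattice.python.estimators import calibrated as calibrated_lib

class CalibratedDnn(calibrated_lib.Calibrated):
    """Calibrated estimator with a DNN instead of a linear combination."""

    def prediction_builder_from_calibrated(
            self, mode, per_dimension_feature_names, hparams, calibrated):
        # `calibrated` holds the (optionally monotonic) calibrated features.
        # The hidden-layer sizes are arbitrary illustrative choices.
        net = calibrated
        for units in (64, 32):
            net = tf.layers.dense(net, units, activation=tf.nn.relu)
        prediction = tf.layers.dense(net, 1)
        # Assumed return convention: (prediction, projection_ops,
        # regularization); the DNN itself contributes no extra ops.
        return prediction, None, None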

In the coming days we will be pushing a few updates to the estimator library to make (B) simpler and easier to extend. We will try to add an example with monotonicity constraints with that update.

I wonder if the calibration or lattice layers can be used as the upper layers (closer to the output) jointly with a DNN to ensure monotonicity. At the end of this tutorial, under "Other potential use cases...", it mentions a few possibilities for integrating a calibration or lattice layer with other types of networks. Are there any examples out there with guaranteed monotonicity?
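
Something like the following untested sketch is what I have in mind, where the feature that must stay monotonic bypasses the DNN and enters a monotonic lattice directly. It assumes the tfl.lattice_layer signature from the README, which returns (output, params, projection_op, regularization), and that both lattice inputs are pre-scaled to [0, 1]:

import tensorflow as tf
import tensorflow_lattice as tfl

x_mono = tf.placeholder(tf.float32, shape=(None, 1))  # monotonic feature in [0, 1]
x_free = tf.placeholder(tf.float32, shape=(None, 5))  # unconstrained features

# Unconstrained features go through a small DNN embedding into [0, 1].
net = tf.layers.dense(x_free, 16, activation=tf.nn.relu)
embedding = tf.layers.dense(net, 1, activation=tf.nn.sigmoid)

# The monotonic feature skips the DNN, so monotonicity in x_mono is
# preserved end to end by the monotonic lattice on top.
lattice_input = tf.concat([x_mono, embedding], axis=1)
(y, _, projection_op, _) = tfl.lattice_layer(
    lattice_input, lattice_sizes=(2, 2), is_monotone=[True, False])
# projection_op would still need to be run after each optimizer step.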

Hi @mmilanifard! Could you please share an example of training a lattice with monotonicity constraints?
I tried using the projection ops, but I am facing an error while saving and restoring the model from the checkpoint (similar to issue #35).