tensorflow / model-analysis

Model analysis tools for TensorFlow

Can't load metrics inside tensorboard

teodor440 opened this issue · comments

System information

  • Have I written custom code (as opposed to using a stock example script
    provided in TensorFlow Model Analysis): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow Model Analysis installed from (source or binary): binary (PyPI)
  • TensorFlow Model Analysis version: 0.21.6
  • Python version: 3.7.7
  • Jupyter Notebook version: -
  • Exact command to reproduce: -

Describe the problem

I successfully generated the logs for TFMA, but I'm having trouble getting them displayed in TensorBoard. I can only see the plot of the first metric, which appears when the page loads; the other metrics appear empty, and even the first one, although already plotted, fails when I try to load it further down the page.
[screenshot: Fairness Indicators dashboard in TensorBoard with empty metric plots]
Using TFMA inside a notebook displays the metrics properly, but using TF Serving and a local TensorBoard fails to display them.
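
For reference, this is roughly how the local TensorBoard is started against the log directory (a minimal sketch; the path is a placeholder, and running the equivalent tensorboard --logdir command from a shell behaves the same way):

# Sketch: start a local TensorBoard programmatically against the tfma logs.
# "path/to/logdir" is a placeholder, not my real directory.
from tensorboard import program

tb = program.TensorBoard()
tb.configure(argv=[None, "--logdir", "path/to/logdir"])
print("TensorBoard listening on", tb.launch())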

Source code / logs

I suspect there is a problem loading a script (or its source map).
This is what I see in the Firefox console:

paper-header-panel is deprecated. Please use app-layout instead! tf-tensorboard.html.js:18569:69
paper-toolbar is deprecated. Please use app-layout instead! tf-tensorboard.html.js:18575:204
Content Security Policy: Ignoring ‘x-frame-options’ because of ‘frame-ancestors’ directive.
Loading failed for the <script> with source “https://www.gstatic.com/charts/loader.js”. plugin_entry.html:1:1
Content Security Policy: The page’s settings blocked the loading of a resource at https://www.gstatic.com/charts/loader.js (“script-src”). vulcanized_tfma.js:1046:317
Content Security Policy: The page’s settings blocked the loading of a resource at https://fonts.googleapis.com/css?family=Roboto:400,300,300italic,400italic,500,500italic,700,700italic (“style-src”). vulcanized_tfma.js:1280:354
Content Security Policy: The page’s settings blocked the loading of a resource at https://fonts.googleapis.com/css?family=Roboto+Mono:400,700 (“style-src”). vulcanized_tfma.js:1280:354
uncaught exception: Object
Source map error: Error: request failed with status 404
Resource URL: http://localhost:6006/data/plugin/fairness_indicators/vulcanized_tfma.js
Source Map URL: web-animations-next-lite.min.js.map

Source map error: Error: request failed with status 404
Resource URL: http://localhost:6006/data/plugin/fairness_indicators/vulcanized_tfma.js
Source Map URL: web-animations-next-lite.min.js.map

I also tried loading the page in Chrome with content security policies disabled, but the source map error persists.

Can you please share the code and steps for us to reproduce this error? Thanks!

The code I used to write the metrics is:

import os

import tensorflow as tf
import tensorflow_model_analysis as tfma
from tensorboard_plugin_fairness_indicators import summary_v2

# eval_input_receiver_fn, _label and _test_path are defined elsewhere in my script.

def evaluate_fairness(estimator, logdir):
    '''
    This will log to TensorBoard how well the model performed on the test dataset
    '''
    # First create an eval SavedModel
    # Prepare temp folders to store the eval SavedModel
    tfma_tensorboard_dir = os.path.join(logdir, "tfma")
    export_model_eval_dir = os.path.join(tfma_tensorboard_dir, "model")
    evaluation_result_dir = os.path.join(tfma_tensorboard_dir, "eval")
    # Export the eval SavedModel
    exported_model_path = tfma.export.export_eval_savedmodel(
        estimator=estimator, export_dir_base=export_model_eval_dir,
        eval_input_receiver_fn=eval_input_receiver_fn)
    # Set the parameters for evaluation
    thresholds = [0.5, 0.99]
    metrics_callbacks = [
        tfma.post_export_metrics.auc(
            target_prediction_keys=[_label], labels_key=_label),
        tfma.post_export_metrics.fairness_indicators(
            thresholds=thresholds, target_prediction_keys=[_label],
            labels_key=_label),
        tfma.post_export_metrics.confusion_matrix_at_thresholds(
            target_prediction_keys=[_label], labels_key=_label,
            thresholds=thresholds),
        tfma.post_export_metrics.example_count(
            target_prediction_keys=[_label], labels_key=_label),
        tfma.post_export_metrics.auc_plots(
            target_prediction_keys=[_label], labels_key=_label)]
    # Apparently I have to reload the saved model
    eval_shared_model = tfma.default_eval_shared_model(
        eval_saved_model_path=exported_model_path,
        add_metrics_callbacks=metrics_callbacks,
        include_default_metrics=False)
    # Actually perform the analysis on the exported model
    eval_result = tfma.run_model_analysis(
        eval_shared_model=eval_shared_model,
        data_location=_test_path,
        output_path=evaluation_result_dir)
    # Write the results as a summary the Fairness Indicators plugin can read
    writer = tf.summary.create_file_writer(tfma_tensorboard_dir)
    with writer.as_default():
        summary_v2.FairnessIndicators(evaluation_result_dir, step=1)
    writer.close()

together with my own estimator and eval_input_receiver_fn.
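
The function is invoked roughly like this (a sketch; build_estimator is a hypothetical placeholder for my actual estimator construction, which isn't shown):

# Hypothetical usage sketch -- build_estimator() stands in for the real
# estimator setup.
estimator = build_estimator()
evaluate_fairness(estimator, logdir="logs")
# TensorBoard is then pointed at the same directory, e.g.:
#   tensorboard --logdir=logs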

In a Jupyter notebook I was able to see the metrics with:

path = "C:\\Users\\..."
eval_result = tfma.api.model_eval_lib.load_eval_results([path], mode=tfma.constants.DATA_CENTRIC_MODE)
eval_result = eval_result.get_results()[0]
tfma.view.render_slicing_metrics(eval_result, slicing_spec=tfma.slicer.SingleSliceSpec())
tfma.view.render_plot(eval_result)

[screenshot: the same metrics rendering correctly in the Jupyter notebook]
If there is an issue with the code, a good clue might be that inside Jupyter I'm offered metrics named post_export_metrics/.../metric, while TensorBoard only offers .../metric (with a single valid, default-open tab named post_export_metrics/.../metric).
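
As a quick check (a sketch, assuming the eval_result loaded above), the metric names tfma actually wrote can be listed and compared against the names the TensorBoard plugin looks for; depending on the tfma version, the dict may be nested one or two levels (by output name and sub key) before the metric names appear:

# Sketch: print the slice keys and the top-level metric keys that were
# computed, to compare against what the TensorBoard plugin displays.
for slice_key, metrics in eval_result.slicing_metrics:
    print(slice_key, list(metrics.keys()))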

You might want to check whether the files themselves are OK. This is a link to a folder like the one I pass to TensorBoard (you might want to edit some paths inside the files): https://drive.google.com/open?id=1D5ipFLnqQkOh4P7I69S4aQ0j-tvEBult

I wish I could be more specific about the reproduction steps, but I'm also unsure where the problem might come from.

My tf packages:
[screenshot: list of installed TensorFlow-related package versions]

So, after reviewing my packages, I found out it was an installation issue.
I'll leave a compatible set of packages here:

[screenshot: a compatible set of package versions]
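
For anyone comparing environments, a minimal sketch for printing the relevant installed versions (run it in the same environment that serves TensorBoard):

# Sketch: print the versions of the packages involved.
import tensorflow as tf
import tensorflow_model_analysis as tfma
from tensorboard import version as tb_version

print("tensorflow:", tf.__version__)
print("tensorflow_model_analysis:", tfma.version.VERSION_STRING)
print("tensorboard:", tb_version.VERSION)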