fzi-forschungszentrum-informatik / TSInterpret

An Open-Source Library for the interpretability of time series classifiers


[Q] Why does COMTE sometimes give a cf_label equal to the predicted label?

JanaSw opened this issue · comments

Hello,

I am doing multivariate time series classification with 2 features, and I am trying to use COMTE's explain function.

However, I am facing an issue where the label returned by the explain function is the same as the predicted one (even though I am passing orig_class=np.argmax(prob_item) and target_class=np.argmin(prob_item)).
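For a two-class problem the two arguments above are complementary, which can be verified quickly (prob_item here is a hypothetical softmax output, not from the actual model):

```python
import numpy as np

# Hypothetical class-probability output for one instance
prob_item = np.array([0.8, 0.2])

orig_class = np.argmax(prob_item)    # predicted class
target_class = np.argmin(prob_item)  # desired counterfactual class

# With two classes, the target is always the other class
assert orig_class != target_class
print(orig_class, target_class)  # → 0 1
```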

I tried debugging the code of explain and found that the deduced "other" has a target equal to orig_class and different from the target passed:

[screenshot: debugger output]

Is this normal? Is the error on my side, or do you think it might be a bug?
Can you please clarify in case I misunderstood something?

Thank you

Hi @JanaSw,

I just had a look at the code and it works fine for my test models and data. Did you check the accuracy and precision/recall of your classifier? Sometimes no counterfactual can be found due to insufficient classification capability.

If available, feel free to share a minimal sample so that I can replicate the issue.

Hello,

Yes, my model's accuracy is 87%.

Actually, I missed an important point in my first question :D which is:
I am doing 54-fold leave-one-out cross-validation, and this is happening on only 3 folds (the CF label is the same as the item's label).

However, these three folds had an accuracy of 100%. What do you think the problem is?

Also I just want to clarify something please:

  • I have 2 features, but for the majority of the folds (more than 40) I get a counterfactual for only one of the features (for the other 10-15 I get 2 plots). It is written in your paper that COMTE plots only the "changed features". Can you please elaborate on this?

Thanks for your answers

Hi,

Maybe first an explanation of how COMTE works. COMTE takes the provided dataset and your input instance. It generates a counterfactual by replacing one of the instance's feature series with the corresponding series from an instance in the provided dataset. The algorithm approximates a solution by minimizing the distance between the original and the counterfactual instance while changing the predicted class.
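The substitution step described above can be sketched in plain NumPy (an illustration of the idea, not the library's actual implementation; function and variable names are placeholders):

```python
import numpy as np

def substitute_channel(instance, distractor, channels):
    """Build a counterfactual candidate by replacing the given
    feature channels of `instance` with those of `distractor`.
    Both arrays have shape (n_features, n_timesteps)."""
    candidate = instance.copy()
    candidate[channels] = distractor[channels]
    return candidate

# Toy example: 2 features, 5 time steps
instance = np.zeros((2, 5))
distractor = np.ones((2, 5))

# Swap only feature 0, as COMTE would when changing one channel suffices
candidate = substitute_channel(instance, distractor, [0])
print(candidate[0].sum(), candidate[1].sum())  # → 5.0 0.0
```

The optimization then searches over which channels to swap and which distractor to take, preferring candidates that flip the predicted class while changing as little as possible.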

Now to your question about why, for most folds, only one of your two features appears in the counterfactual plot.

The plot function plots only the feature rows where a change takes place. All other (not plotted) rows are unchanged and therefore identical to the original values. The idea is to keep the plot from becoming overwhelming for large multivariate time series (e.g., 91 features): plotting all 91 features when only 2 needed to be changed to generate a counterfactual would hide the relevant information.
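Determining which rows changed can be sketched like this (illustrative only; the library's plot code may differ):

```python
import numpy as np

def changed_rows(original, counterfactual, atol=1e-8):
    """Return indices of feature rows that differ between the
    original instance and its counterfactual.
    Both arrays have shape (n_features, n_timesteps)."""
    same = np.isclose(original, counterfactual, atol=atol).all(axis=1)
    return np.where(~same)[0]

# Toy example: 3 features, only feature 1 was modified
original = np.zeros((3, 4))
cf = original.copy()
cf[1] += 1.0

print(changed_rows(original, cf))  # → [1]
```

Only the rows returned here would be drawn; in your case that is usually a single feature.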

Regarding the issue of the original instance and the counterfactual having the same class: if this is the case, two possible reasons come to mind:

  • The ability to generate a counterfactual strongly relies on the background dataset given when the algorithm is initialized. If you provide the original labels there instead of the predicted labels, your background dataset may be biased, e.g., your original label says 1 while your classifier predicts 0.
  • It is also important that your background dataset contains your desired CF class.
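Both points can be checked with a short sanity check before calling the explainer (a generic sketch; the function name is a placeholder, and `background_labels` stands for whatever labels you pass to the algorithm):

```python
import numpy as np

def check_background(background_labels, target_class):
    """Verify the background dataset before generating counterfactuals.
    `background_labels` should be the *predicted* labels of the
    background data, not the ground-truth labels."""
    classes, counts = np.unique(background_labels, return_counts=True)
    if target_class not in classes:
        raise ValueError(
            f"Desired CF class {target_class} is not represented "
            "in the background dataset."
        )
    return dict(zip(classes.tolist(), counts.tolist()))

# Toy example: predicted labels of the background set
pred_labels = np.array([0, 0, 1, 1, 1])
print(check_background(pred_labels, target_class=1))  # → {0: 2, 1: 3}
```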

Hello,

Thanks a lot for the explanation and your answer.

Regarding the last 2 points you mentioned:

  • I am sure that the original labels are the same as the predicted labels.

  • If this is the case, it means that no CF was found. But wouldn't it be better if COMTE returned "No counterfactual was found for this instance" instead of returning an explanation with the same label as the original?

Thank you again for your responsiveness

Hi,

Do you also know whether all labels (especially the desired counterfactual label) are represented in the dataset? Do you get the warning "Due to lack of true postitives for class {c} no kd-tree could be build."?

You could also check whether increasing the parameter number_distractors helps.

Yes, we thought about adding a warning. Returning a string is unfortunately not an option. Counterfactuals are often evaluated with respect to validity (whether the CF is an actual CF, i.e., CF_Label != original_Label). If you run the explainer for multiple instances and append the explanations, a string in between is inconvenient from an evaluation perspective.
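The validity metric mentioned above can be sketched as follows (a generic illustration, not the library's evaluation code):

```python
import numpy as np

def validity(original_labels, cf_labels):
    """Fraction of counterfactuals whose label actually differs
    from the original label (CF_Label != original_Label)."""
    original_labels = np.asarray(original_labels)
    cf_labels = np.asarray(cf_labels)
    return float(np.mean(cf_labels != original_labels))

# 4 explanations, one of which failed to flip the class
print(validity([0, 1, 0, 1], [1, 0, 0, 0]))  # → 0.75
```

A failed counterfactual (same label as the original) simply lowers this score, which is why returning an array rather than a string keeps batch evaluation straightforward.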

Closed due to inactivity