johannfaouzi / pyts

A Python package for time series classification

Home Page: https://pyts.readthedocs.io


Bin edges are all zero - ValueError

wwjd1234 opened this issue

Description

I have data in the same format used for load_basic_motions(return_X_y=True), but when I created my data set I had to pad some time series with zeros to make all the lengths the same. I ended up with an X_train ndarray of shape (177, 12, 111) and a y_train of shape (177,).

When I run clf.fit, I get a ValueError with the following message:

 At least two consecutive quantiles are equal. Consider trying with a smaller number of 
 bins or removing timestamps with low variation.

The bin edges all seem to be zero.

Is this because of the padding?

Steps/Code to Reproduce

    from pyts.multivariate.transformation import WEASELMUSE
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.5, random_state=42)
    transformer = WEASELMUSE()
    logistic = LogisticRegression(random_state=1, max_iter=10000, solver='liblinear', multi_class='ovr')
    clf = make_pipeline(transformer, logistic)
    clf.fit(X_train, y_train)

Versions

NumPy 1.20.3
SciPy 1.6.3
Scikit-Learn 0.24.2
Numba 0.53.1
Pyts 0.11.0

@wwjd1234

I ran into the problem of "constants" as well when I first started using this library. It comes from an internal check in which the library treats consecutive equal bin edges (typically caused by constant values) as an error. The issue can be resolved by reducing the number of bins, but that is not practical in most cases.

The solution is to edit discretizer.py in "site-packages\pyts\preprocessing" under the library's install location. Find the following line:

    self._check_constant(X)

and comment it out. That should resolve your problem.

Hi, I tried commenting out the line of code you mentioned, but I still get the error. It is a ValueError raised by the following function:

    def _compute_bins(self, X, y, n_timestamps, n_bins, strategy):
        if strategy == 'normal':
            bins_edges = norm.ppf(np.linspace(0, 1, self.n_bins + 1)[1:-1])
        elif strategy == 'uniform':
            timestamp_min, timestamp_max = np.min(X, axis=0), np.max(X, axis=0)
            bins_edges = _uniform_bins(timestamp_min, timestamp_max,
                                       n_timestamps, n_bins)
        elif strategy == 'quantile':
            bins_edges = np.percentile(
                X, np.linspace(0, 100, self.n_bins + 1)[1:-1], axis=0
            ).T
            if np.any(np.diff(bins_edges, axis=0) == 0):
                raise ValueError(
                    "At least two consecutive quantiles are equal. "
                    "Consider trying with a smaller number of bins or "
                    "removing timestamps with low variation."
                )
        else:
            bins_edges = self._entropy_bins(X, y, n_timestamps, n_bins)
        return bins_edges

Something interesting I found out: when I change the strategy to 'uniform', it seems to work. I didn't need to comment out the line you suggested above. Also, all strategies work except 'quantile', which throws the error:

    transformer = WEASELMUSE(strategy='uniform', word_size=4, window_sizes=np.arange(5, 105))

Hi,

First of all, I would like to mention that variable-length time series are unfortunately poorly supported in this library at the moment. The reasons for this are twofold: (i) the algorithms introduced in their original papers were rarely meant to deal with variable-length time series (because of the lack of such data sets in the UCR Time Series Classification repository), and I wanted to implement the algorithms as they were described in the papers; and (ii) it's obviously easier and more efficient to work with fixed-length time series using NumPy arrays. Therefore, padding shorter time series with a fixed value is likely to introduce some issues.
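For illustration, zero padding to a common length typically looks something like this (made-up values):

    import numpy as np

    # Hypothetical variable-length series, zero-padded to a common length
    series = [np.array([1., 2., 3.]), np.array([4., 5.])]
    max_len = max(len(s) for s in series)
    X = np.stack([np.pad(s, (0, max_len - len(s))) for s in series])
    print(X)  # [[1. 2. 3.]
              #  [4. 5. 0.]]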

More specifically, on the WEASEL algorithm, the first steps are as follows (a small sketch follows the list):

  1. Extracting non-overlapping subsequences of each time series.
  2. Applying the SymbolicFourierApproximation algorithm on these subsequences, which consists of two steps:
    i. Extracting some discrete Fourier coefficients for these subsequences.
    ii. Discretizing (i.e. binning) these Fourier coefficients.
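Here is a minimal sketch of these two steps (not the actual WEASEL implementation; the window size and SFA parameters are illustrative):

    import numpy as np
    from pyts.approximation import SymbolicFourierApproximation

    rng = np.random.RandomState(42)
    X = rng.randn(50, 100)  # 50 time series of length 100

    # Step 1: extract non-overlapping subsequences of a given window size
    window_size = 20
    n_windows = X.shape[1] // window_size
    subsequences = X[:, :n_windows * window_size].reshape(-1, window_size)

    # Step 2: discrete Fourier coefficients + binning (SFA)
    sfa = SymbolicFourierApproximation(n_coefs=4, n_bins=4, strategy='quantile')
    words = sfa.fit_transform(subsequences)  # shape (50 * 5, 4), one letter per coefficient
    print(words[:2])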

An error is raised when two back-to-back bin edges are equal, because in this case a bin is empty ([a, a) is an empty interval for any real number a). It would be possible to simply remove this bin, but in this case the number of bins would be smaller for this feature, which would be an issue.
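To make this concrete, here is a toy illustration (made-up feature values) mirroring the strategy='quantile' branch shown above:

    import numpy as np

    # A feature dominated by a single value (e.g. zeros coming from padding)
    feature = np.array([0., 0., 0., 0., 0., 0., 1., 2.])

    # Quantile bin edges for n_bins=4: the 25th, 50th and 75th percentiles
    edges = np.percentile(feature, np.linspace(0, 100, 4 + 1)[1:-1])
    print(edges)           # [0.   0.   0.25]
    print(np.diff(edges))  # [0.   0.25] -> a zero difference raises the ValueError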

Now let's have a look at the different strategies to compute the bin edges:

  • 'uniform': All bins in each sample have identical widths
  • 'quantile': All bins in each sample have the same number of points
  • 'normal': Bin edges are quantiles from a standard normal distribution
  • 'entropy': Bin edges are computed using information gain

With strategy='uniform', you're unlikely to face this issue, because it only uses the minimum and maximum values to compute the extreme bin edges, and the intermediate bin edges are computed with a simple linear interpolation (the only failure case would be a constant feature). With strategy='normal', you will never face this issue, because the bin edges are the quantiles of the standard normal distribution, which are all distinct. With strategy='entropy', you may face this issue. Finally, strategy='quantile' is the strategy for which you're the most likely to face this issue, because it uses the quantiles of the feature itself to compute the bin edges: if one value is very common in your feature, it's possible that two back-to-back bin edges will be equal. A natural solution in that case is to decrease the number of bins (or to change the way the bin edges are computed).
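For comparison, on the same toy feature as above, 'uniform' and 'normal' produce distinct edges (again, just an illustration):

    import numpy as np
    from scipy.stats import norm

    feature = np.array([0., 0., 0., 0., 0., 0., 1., 2.])

    # 'uniform': linear interpolation between min and max -> distinct edges
    print(np.linspace(feature.min(), feature.max(), 4 + 1)[1:-1])  # [0.5 1.  1.5]

    # 'normal': quantiles of the standard normal distribution -> always distinct
    print(norm.ppf(np.linspace(0, 1, 4 + 1)[1:-1]))  # [-0.674  0.     0.674]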

Going back to your use case, this is where zero padding becomes an issue. You may have some subsequences that contain only zeros, whose Fourier coefficients will all be equal to zero. And if you have many such subsequences, then a feature (i.e. a Fourier coefficient) will contain many zeros, and the aforementioned issue will occur.
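A tiny sketch of the mechanism:

    import numpy as np

    # The DFT of an all-zero subsequence is all zeros
    print(np.fft.rfft(np.zeros(20)))  # [0.+0.j 0.+0.j ... 0.+0.j]

    # With many such subsequences, a given Fourier coefficient (one feature
    # for the discretizer) contains mostly zeros, so its quantiles collapse
    # to zero and consecutive bin edges coincide.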

Hope this helps you understand what's going on under the hood.

Thanks for the clear explanation @johannfaouzi. So the fix seems to be to lower the number of bins when using strategy='quantile'.
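For instance, something along these lines (n_bins=2 is illustrative; whether it is enough depends on the data):

    # Fewer bins means fewer quantiles, so duplicated edges are less likely
    transformer = WEASELMUSE(strategy='quantile', n_bins=2,
                             word_size=4, window_sizes=np.arange(5, 105))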

It would be great to see the strategies working with variable-length time series, as these occur in the majority of use cases involving time series data.