ahmednuaman / grunt-scss-lint

A Grunt task to lint your SCSS

Is there a limit of files to scan, or a limit of errors to output (screen or XML)?

danielppereira opened this issue · comments

Is there a limit to the number of files grunt-scss-lint can scan when running the task, or to the number of errors it can output?

If I execute the command outside Grunt, with scss-lint directly, all .scss files are linted, so the tool itself has no file limit.

> scss-lint ../../pagamento/webroot/sass/

But when I run my grunt-scss-lint task:

    options: {
        config: '<%= app.config %>.scss-lint.yml',
        reporterOutput: '<%= app.config %>scsslint/report/scsslint_junit.xml',
        force: true
    },
    files: [{
        src: ['<%= app.src %>webroot/sass/checkout3/**/*.scss']
    }]

it breaks like this:

Running "scsslint:all" (scsslint) task
Running scss-lint on all
>> scss-lint failed with error code: undefined
>> and the following message:Error: stdout maxBuffer exceeded.
Warning: Task "scsslint:all" failed. Use --force to continue.

Aborted due to warnings.

If I change the src line to match fewer files by being more specific (changing '/**/' to '/core/'), the task doesn't break:

    files: [{
        src: ['<%= app.src %>webroot/sass/checkout3/core/*.scss']
    }]

If the limitation is the quantity of errors output to the screen, then maybe this limit should be ignored when the 'reporterOutput' option is in use and the errors are written to an XML file.

using:

Ubuntu 12.04.4 LTS
scss-lint (0.26.2)
grunt-scss-lint (^0.3.2)

Sorry, I tried but I don't have enough knowledge to fix it myself.
Another plugin fixed the same issue, as you can see here:

re1ro/grunt-bg-shell#4

Sure, I'll have a go now.

All fixed and pushed on v0.3.3

I'm getting this same issue. Originally it was due to the amount of errors; however, once I fixed up those errors it seems to NOT validate my files. FYI: I have a total of 30 Sass files, and the following is the Grunt call:

    'scsslint': {
        allFiles: [
            '<%= SOURCE_PATH %>/sass/components/*.scss',
            '<%= SOURCE_PATH %>/sass/modules/*.scss'
        ],
        options: {
            bundleExec: false,
            config: 'source/.scss-lint.yml',
            reporterOutput: null,
            force: false,
            exclude: ['<%= SOURCE_PATH %>/bower_components/**/**/*.scss']
        }
    },

I intentionally added an error to a few files in both directories, and none of them get picked up by the linter; however, if I use only one path it works.

gem: scss-lint v0.30.0
npm: grunt-scss-lint v0.3.3

I had the same problem, so I debugged a little bit.
When I add a console.log(err) on line 169 of tasks/lib/scss-lint.js, I get the following output:

Running scss-lint on files
{ [Error: Command failed: The command line is too long.
] killed: false, code: 1, signal: null }
>> 145 files are lint free

So it seems that the joined array of files to lint is too big.
It is very confusing that it says all the files are lint free while it is actually crashing.

This shouldn't be closed as the bug is still there.

Sad times, anyone want to have a go at fixing it?

I can take a look at this sometime this week. I'd like to sooner, but.. work and all.

I have a massive code base I can check against, so I should be able to reproduce pretty quickly.

So, I see that we have a couple of options off hand:

  1. Close the issue and have the users update their maxBuffer (In my codebase it had to be: 3000 * 1024) to succeed.
    • Not a super great option in terms of usability, because we are forcing users to increase their memory usage.
    • However, it's maintainable and removes us from the heavy lifting!
  2. Create a check where we compare the length of the results against the maxBuffer, then break the results down and write them to the log / XML file
    • Will add a fair amount of code.
    • Could produce more bugs / issues later down the line
    • Could confuse users if they set a buffer and then see a faked "stream" coming through
      • this can be worked around by printing / writing files carefully though.
  3. Take a hint from @jshint: if our results are over a certain length, print them all and exit without any additional processing
    • this would be a breaking change
    • we would have to figure out something for --force
  4. Look into moving away from Spawn and into Fork
    • This would allow a new instance of the V8 engine and the results come back as a stream that we can do with what we will.

I think any of these options are viable, we just need to pick one.

Thoughts?

Interesting points. I'm going to have a look at some node libraries to see if they can help with this problem. I'm looking to refactor this library and move to a more modular structure (eg how the XML is created).

The more I think about it the more I like the 3rd option the best, personally.

But I am also a huge fan of a rewrite / update.

Just also ran into this issue and I like the third option as well.

@davidjbradshaw You can always change the maxBuffer option to something really high if you want to bypass the issue.
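For anyone else landing here, a sketch of that workaround (assuming the plugin forwards a maxBuffer option to the underlying exec call, as this comment suggests; the 3000 * 1024 value is the one reported earlier in the thread, not a recommendation):

```javascript
// Gruntfile.js fragment (sketch): raise maxBuffer so a large report fits.
scsslint: {
    allFiles: ['sass/**/*.scss'],
    options: {
        config: '.scss-lint.yml',
        maxBuffer: 3000 * 1024  // bytes of child stdout to buffer
    }
}
```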

Thanks, just set it to 300000000 to get around the issue; the project I just joined has 21,000 errors!!!

As a programmer, this thread makes me sad 😭

So why not make a PR? ;)

Yeah, this is pretty damn stupid honestly.

Problem persists for me. Any solution in the works? I would if I could, but do not have the skills.