thoughtbot / shoulda

Makes tests easy on the fingers and the eyes

Home Page: http://www.thoughtbot.com/community


Add matcher for validates_with

kyledecot opened this issue

It would be nice to be able to do something like:

it { should_validate_with FoobarValidator }

Even if this were possible, defining tests this way does not actually test anything: testing an implementation with the implementation verifies nothing. You're better off using a series of should allow_value("xyz").for(:field) assertions, or defining your own custom matcher specifically for your custom validator.
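
For illustration, here's what that behavior-level approach looks like with shoulda-matchers' allow_value matcher (the Order model, :email attribute, and values are hypothetical, not from this thread):

    # Behavior-level spec: asserts the validation's outcome rather than
    # which validator class produces it. Model and values are hypothetical.
    RSpec.describe Order, type: :model do
      it { should allow_value("user@example.com").for(:email) }
      it { should_not allow_value("not-an-email").for(:email) }
    end

These specs keep passing whether the rule lives inline in the model or inside a custom validator class.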

Does it not test that the model is using the custom validator? The custom validator itself has its own tests elsewhere.

For other people finding this, I came across this gist: https://gist.github.com/Bartuz/98abc9301adbc883b510
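
The gist's contents aren't reproduced here, but a matcher in that spirit can be built on ActiveModel's public validator introspection. This is a minimal sketch (the matcher name and failure message are illustrative, not necessarily what the gist does):

    # Custom RSpec matcher that checks a validator class is registered on
    # the record's class, via ActiveModel::Validations.validators.
    RSpec::Matchers.define :validate_with do |validator_class|
      match do |record|
        record.class.validators.map(&:class).include?(validator_class)
      end

      failure_message do |record|
        "expected #{record.class} to validate with #{validator_class}"
      end
    end

    # Usage, with an implicit model subject:
    #   it { is_expected.to validate_with(FoobarValidator) }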

Does it not test that the model is using the custom validator?

I argue that is not a useful test. I argue that this only makes your test suite more brittle.

Testing that a model is merely using some particular implementation is not the same as testing the behavior you actually want the model to exhibit. That is, you shouldn't have to change your tests if you change the validator's implementation but keep the same behavior.

I see what you're saying.

In my case I created a separate custom validator because the validation logic was quite complicated. As a separate custom validator I can test it by handing it a stubbed object, similar to this example on Stack Overflow: http://stackoverflow.com/a/17739176/4075554. Not having to create a full set of database records for the associated objects took the test execution time from 13 seconds to 0.3 seconds, which adds up over the entire suite.
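
As a rough sketch of that isolation style (the validator, stand-in model, and rule below are invented for illustration, loosely following the linked answer):

    require "active_model"

    # Hypothetical custom validator under test.
    class FoobarValidator < ActiveModel::Validator
      def validate(record)
        record.errors.add(:name, "can't be blank") if record.name.to_s.empty?
      end
    end

    # Lightweight stand-in: validations without a database table.
    class FoobarValidatable
      include ActiveModel::Validations
      attr_accessor :name
      validates_with FoobarValidator
    end

    RSpec.describe FoobarValidator do
      it "rejects a blank name without touching the database" do
        record = FoobarValidatable.new
        record.name = ""
        expect(record).not_to be_valid
        expect(record.errors[:name]).not_to be_empty
      end
    end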

Obviously, I could stub the associated record data on the model under test (data from the associated models is all accessed via delegations, so this is actually quite easy), but it always seems 'wrong' to me to stub the object under test.

So, I have a custom validator that's tested without hitting the database, and I can test that my models are using that validator, again without hitting the database.

This struck me as a useful approach, but perhaps I'm wrong! Either way, I'm happy with the matcher I found in the gist :)

I argue that is not a useful test. I argue that this only makes your test suite more brittle.

I'm uncertain as to how testing whether a custom validator is used on a given model makes your test suite more brittle. It's no different from using spies to test whether certain classes are instantiated and certain methods are called on them, for example. Isn't the point of a unit test to narrowly test the functionality of a given class? Testing the functionality of a separate class within a given class's tests sets a dangerous precedent for poorly organized tests, IMHO.

As long as the custom validation class is being tested elsewhere, this is a useful feature.

This is very much a mockist-vs-classicist debate, and I fall on the classicist side. I currently help maintain dozens, if not hundreds, of projects with test suites, many of which I did not originally author. Those that take more of a classicist approach are much easier to maintain and refactor. Those that mock too heavily typically suffer from more technical debt because refactoring is made more difficult. This is not only my anecdotal opinion; it matches the behavior and complaints of the mockist teams who originally authored those projects. That is, the overly mocked projects are more likely to be given up on and left to rot, or marked as targets for complete rewrites.

I’m not against mocks, stubs, or even spies, but I’ve found test suites are best when they’re used sparingly.

https://www.thoughtworks.com/insights/blog/mockists-are-dead-long-live-classicists

I suppose my issue is that I'd rather test a custom validation class independently of a model; otherwise I need to write the same tests for each applicable model, which will likely leave the test suite requiring more upkeep than the approach proposed here.

The classicist approach is one point of view, and my hope for this library is that it would be inclusive of other points of view that present valid cases for the addition of new features.

I appreciate the thoughtful and timely response.

You should unit test that custom validator class, but you should, more importantly, test the behavior of the validation integrated into your model.

Btw, nothing is stopping you from testing using the mockist approach, and this library isn’t necessary to do that. Call valid? and assert that an instance of your custom validator class had its method called. Done.
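
A sketch of that, using rspec-mocks (the model and validator names are placeholders):

    # Mockist-style check: assert the validator's validate method fires
    # when the model runs its validations. No shoulda matcher involved.
    it "runs FoobarValidator when validated" do
      expect_any_instance_of(FoobarValidator).to receive(:validate)
      Order.new.valid?
    end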

That's exactly what I'm doing now. I see this as a problem that will repeat itself and thought shoulda should include this matcher, but I suppose this is a shoulda coulda woulda situation 😄