w3c / rdf-canon

RDF Dataset Canonicalization (deliverable of the RCH working group)

Home Page: https://w3c.github.io/rdf-canon/spec/


Normative description of algorithms

gkellogg opened this issue

The algorithms used in the spec are overly prescriptive and do not include normative language.
The spec could describe the theoretical basis for dataset canonicalization and describe behavior using normative statements. The explicit algorithms would then follow as an informative appendix.
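For context, the behavioral requirement such normative statements would capture can be illustrated with a toy sketch. This is *not* the canonicalization algorithm from the spec; the hashing strategy and labels here are purely illustrative. The essential property is that two datasets that differ only in their blank node identifiers must canonicalize to the same set of quads:

```python
import hashlib

def first_degree_hash(bnode, quads):
    """Hash the quads mentioning a blank node, with the target node
    replaced by "_:a" and every other blank node by "_:z".
    (Toy stand-in for a first-degree hash; not the spec's algorithm.)"""
    lines = []
    for q in quads:
        if bnode in q:
            lines.append(" ".join(
                "_:a" if t == bnode else
                "_:z" if t.startswith("_:") else t
                for t in q))
    return hashlib.sha256("\n".join(sorted(lines)).encode()).hexdigest()

def canonicalize(quads):
    """Deterministically relabel blank nodes by hash order, then sort.
    Works only when first-degree hashes are unique (no automorphisms)."""
    bnodes = {t for q in quads for t in q if t.startswith("_:")}
    ranked = sorted(bnodes, key=lambda b: first_degree_hash(b, quads))
    labels = {b: f"_:c14n{i}" for i, b in enumerate(ranked)}
    return sorted(tuple(labels.get(t, t) for t in q) for q in quads)

# Two isomorphic datasets, differing only in blank node names:
g1 = [("_:x", "<p>", "_:y"), ("_:y", "<p>", "<o>")]
g2 = [("_:b2", "<p>", "_:b1"), ("_:b1", "<p>", "<o>")]
assert canonicalize(g1) == canonicalize(g2)
```

A normative description could state this invariant (isomorphic inputs yield identical canonical output) and leave the concrete hashing steps to an algorithmic appendix; the actual spec algorithm additionally handles the hard case where simple hashes collide.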

I am a bit in an "if it ain't broken, don't fix it" mood... If the only normative text is the theoretical basis and some higher-level definition of the algorithm, we will have to go out of our way to prove that the current algorithmic description is indeed correct. Doable, yes; maybe better, yes; but a pain to do.

Our current spec is very much in the style of HTML & Co., a.k.a. WHATWG-style specifications. I am not overly fond of it, but I may be ready to hold my nose and keep it as it is...

I do think that it would be pretty hard to write non-algorithmic normative text that could actually be used to create an implementation reproducing the required results. It's just that a detailed algorithm bothers me on some level. JSON-LD API faced a similar issue, and we added a normative statement that implementations must behave in the same way as described by the algorithms.

We may need a statement that motivates why we describe it this way.

> JSON-LD API faced a similar issue, and we added a normative statement that implementations must behave in the same way as described by the algorithms.

Yes, something like that is obviously good to have. But, to be practical: the test suite should be used as a measure of conformance...

> JSON-LD API faced a similar issue, and we added a normative statement that implementations must behave in the same way as described by the algorithms.

+1 for doing the same here.

Also see comment in #111.