Allow different strategies for duplicate keys in JSON objects
YintongMa opened this issue:
Both the MySQL JSON parser and Rapidjson currently use a last-wins strategy for duplicate keys (see https://bugs.mysql.com/bug.php?id=86866). Could jsoncons provide an option to choose between strategies (last-wins/first-wins)?
For example:
mysql> INSERT INTO t1 VALUES ('{"x": 17, "x": "red", "x": [3, 5, 7]}');
mysql> SELECT c1 FROM t1;
+------------------+
| c1               |
+------------------+
| {"x": [3, 5, 7]} |
+------------------+
Rapidjson produces the same result. In jsoncons, however, the result is '{"x": 17}'.
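For reference, a minimal snippet that shows the first-wins behaviour when parsing the same document with jsoncons (assuming a recent jsoncons release is installed and on the include path):

```cpp
#include <iostream>
#include <jsoncons/json.hpp>

int main()
{
    // Parse an object containing three members with the same key "x".
    jsoncons::json j = jsoncons::json::parse(R"({"x": 17, "x": "red", "x": [3, 5, 7]})");

    // jsoncons keeps the first occurrence, so this prints the value 17 for "x".
    std::cout << j << "\n";
}
```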
I'm going to close this one, since I don't think we're ever going to support both 'first wins' and 'last wins'. It would make the implementation more complicated, affecting not only reading from strings and files, but also mapping C++ types like std::multimap to basic_json. We chose 'first wins' because that's the behaviour of std::map::insert, emplace, and try_emplace, and it seems the most C++'ish.
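To illustrate the analogy, here is a small standard-library example (nothing jsoncons-specific): std::map::insert and emplace ignore a second insertion with an existing key, which is exactly the 'first wins' behaviour described above.

```cpp
#include <iostream>
#include <map>
#include <string>

int main()
{
    std::map<std::string, int> m;

    m.insert({"x", 17});   // inserted: "x" was not present yet
    m.insert({"x", 42});   // ignored: "x" already exists, first value wins
    m.emplace("x", 99);    // also ignored for the same reason

    std::cout << m["x"] << "\n";  // prints 17
}
```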
I haven't, however, forgotten the other issue you raised (and closed), Allow duplicate keys in JSON. Looking to the future, we're thinking about defining a few macros in the source to allow users to disable features they don't need, such as serialization or UTF-8 validation, to slim down the library footprint. Checking for duplicates could be one of those. In the presence of duplicates, we'd want the basic_json accessors to return the 'first in', consistent with the above, but provide a way to iterate over the others if desired.
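How that would look is still open. Purely as a sketch, and not jsoncons API, the intended behaviour could resemble a multimap-style view: lookups return the first occurrence, while the later duplicates remain reachable by iteration.

```cpp
#include <iostream>
#include <map>
#include <string>

int main()
{
    // Hypothetical representation that retains duplicate members;
    // jsoncons does not currently expose anything like this.
    std::multimap<std::string, std::string> members{
        {"x", "17"}, {"x", "\"red\""}, {"x", "[3, 5, 7]"}
    };

    // Accessor behaviour: return the first occurrence ('first in').
    auto first = members.lower_bound("x");
    std::cout << "accessor sees: " << first->second << "\n";

    // Optional iteration over all occurrences, including duplicates.
    auto [begin, end] = members.equal_range("x");
    for (auto it = begin; it != end; ++it)
        std::cout << "duplicate entry: " << it->second << "\n";
}
```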