drwpow / cobalt-ui

Use W3C Design Token Community Group tokens in CI and code

Home Page: https://cobalt-ui.pages.dev


Splitting tokens into separate files

mike-engel opened this issue · comments

As the number of tokens increases, it would be helpful to be able to split them into separate files for maintainability. A rough proposal might look like this:

// tokens.json
{
  "colors": {
    "$include": "./colors.json"
  }
}
// colors.json
{
  "$type": "color",
  "white": "#FFFFFF",
  "purple": {
    "light": {
      "$value": "#F1EAFF"
    },
    "dark": {
      "$value": "#331673"
    }
  }
}

During parsing, this would compile into:

{
  "colors": {
    "$type": "color",
    "white": "#FFFFFF",
    "purple": {
      "light": {
        "$value": "#F1EAFF"
      },
      "dark": {
        "$value": "#331673"
      }
    }
  }
}

Alternatives could be

// a
// probably the simplest to implement (directly replace the value), but ambiguous and possibly hard to detect reliably
{
  "colors": "./colors.json"
}

// b
// doesn't feel great, since it conflates the include mechanism with the top-level $type for colors
{
  "colors": {
    "$type": "include",
    "$value": "./colors.json"
  }
}

Happy to hear your thoughts!

I think this is a great idea, and I would expect the spec to support it at some point! But splitting up the tokens file is more than an extension; its implementation could make this library incompatible with future versions of the spec if they implement it differently. So in that light, I’m going to close this and suggest that you put your proposal to the spec authors on either of these discussions, so that this library doesn’t shoot off into that uncanny valley of only implementing part of the spec:

  • Can alias tokens reference tokens in another file? (#166)
  • Define order of operations during parsing of tokens file (#123)

I’d be happy to reopen this if, say, the spec reaches an impasse, or nothing happens, etc. But until there’s a very clear “this will never happen in the spec, yet many people feel it’s critical to implement” signal, I’d like to avoid making that fork now.

Thanks @drwpow, that makes sense. design-tokens/community-group#123 seems to have stalled, but I'll hold out hope before I make some homebrew solution to combine files before running them through cobalt 😄

One possible workaround that I’ve seen happen for OpenAPI specs is to make a separate one-off “bundler” that flattens multiple tokens.json schemas into one. And that happens before Cobalt scans it. This can be very crude and rudimentary, and can basically just be a spaghetti script that spits out a single tokens.json file that’s ignored from Git. You could also shortcut it because you know exactly what you do/don’t need.

From my work on bundlers, I know that dealing with conflicts, overrides, and defining what counts as a valid subschema (e.g. can a single hex color be a subschema?) leads you into differences of opinion quite quickly. And there are so many things people want to do that you wouldn’t even think should be possible, let alone a good idea 😄. All that to say: solving a single example of a multi-file schema is actually pretty easy, but solving that problem holistically for most users, in a standardized way, is complex, and is more a community-alignment challenge than a technical one.

So I’ve been tinkering with Tokens Studio for Figma support (#30), and I don’t think that feature is really valuable without this one. Since Figma/Tokens Studio don’t have animation tokens such as duration and cubicBezier—not to mention some elements of the W3C spec are more sophisticated than what either provides—it doesn’t make complete sense to use Figma/Tokens Studio as the single source of truth. But having no automation from Figma whatsoever is also the wrong path. I’d imagine most people would still want to maintain their W3C tokens.json file, but use it to extend Figma styles/tokens. i.e. you’d need multiple schemas.

While I still want to wait on the W3C spec to handle true imports, I think we could allow for simple flattening of multiple token files into one in a safe, future-compatible way.

In other words, just support:

export default {
  tokens: ['_one.json', '_two.json', '_three.json']
}

And combine them all into one tokens file, throwing an error on any token name conflicts. There wouldn’t be any importing or external aliasing; you’d alias as if it were all one big file (kinda like the early, early days of Grunt/Gulp).

That way if/when external aliases are supported, this won’t conflict, and the two could work together.
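The flattening described above could be sketched roughly like this (a hypothetical illustration of the proposal, not Cobalt’s actual implementation):

```javascript
// Sketch: combine already-parsed token files into one object,
// throwing on top-level token/group name conflicts (hypothetical,
// not cobalt-ui's actual code).
function combineTokenFiles(schemas) {
  const combined = {};
  for (const schema of schemas) {
    for (const [name, value] of Object.entries(schema)) {
      if (name in combined) {
        throw new Error(`Token name conflict: "${name}" is defined in more than one file`);
      }
      combined[name] = value;
    }
  }
  return combined;
}
```

Aliases then resolve against the combined object exactly as if everything had lived in one file.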

This makes sense to me, and is basically a more robust solution than what I came up with. Mine just smashes together everything that matches *.tokens.json in a directory, and doesn't care about conflicts. I think your approach adds some guard rails in that regard, so I'm all for it!

@mike-engel just shipped support for this in the latest version—you can just pass an array in. With the caveat, again, that conflicting tokens will get overridden (no warning for now, but if we want to warn or fail, we can).

Awesome, thanks @drwpow. Going to remove our custom solution in favor of this, and will let you know if we run into issues 😄

Actually @drwpow, what would you think about a co build mode or a new plugin that just builds a token file? We have a design-system monorepo with five packages:

  • tokens
  • css
  • sass
  • js
  • tailwind

where css, sass, js, and tailwind import a single token file from the tokens package. With the custom splitting code I wrote before, I had a prepublishOnly script in the tokens package to assemble the tokens into a single file that consumers could import. I'd prefer not to have to add an array of token files to each consuming package, since that's tedious and error-prone.
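For context, that prepublish assembly step might look like this in the tokens package’s package.json (the package name and script path here are hypothetical):

```json
{
  "name": "@your-org/tokens",
  "scripts": {
    "prepublishOnly": "node ./scripts/assemble-tokens.js"
  }
}
```

Since npm runs prepublishOnly before every publish, the assembled file is always fresh when consumers install the package.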

I could write a custom plugin like I did for tailwind, but this seems useful enough to anyone in a similar setup that it might be worth an official plugin?

Oh, that makes sense. I’d probably call it co bundle, and that’d be a separate command from build. At the moment, plugins intentionally don’t have access to any of your source files, so that’d have to be a core feature.

In a sense, tokens.json isn’t an output of the build command; it’s the source of truth that powers the build command. And source files shouldn’t ever be touched by an automated tool. However, you may take one source of truth (or multiple), transform it, and then run the build command on that compiled source of truth. That way there’s integrity: the bundled file produces the exact artifacts you want and isn’t just some untested thing off to the side.

Seems like an extra step, I know, but I’m taking all this from the Redoc CLI for OpenAPI schemas, which I can’t say enough good things about. I’ve been following this exact process for OpenAPI schemas (sources of truth are split up, but the CLI bundles them, and other tools import the bundle) and it’s been great.

co bundle makes sense to me, and would be perfect for what we need, which is just a token file compiler 😄

Shipped a co bundle command in 1.4.0!

Works wonderfully, thanks for the quick turnaround @drwpow!