jhass / nodeinfo

NodeInfo defines a standardized way to expose metadata about an installation of a distributed social network

Home Page: http://nodeinfo.diaspora.software

Add property to refuse indexing

kakkokari-gtyih opened this issue

It would be nice to have a property that denies indexing to aggregation services (e.g. Mastodon Server Index), just like <meta name="robots" content="noindex"> in HTML.

Related downstream issue: misskey-dev/misskey#11213

Since NodeInfo is a common standard that can identify the software type and so on, it may be tempting to remove it entirely. By adding such a property, we could turn away crawlers that support it while keeping NodeInfo's advantages.
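
If such a property were added, one minimal shape could be a single flag inside the free-form metadata object, roughly like this (the name "indexable" is only an illustration, not part of any published schema):

...
"metadata": {
  "indexable": false
},
...

An aggregator that supports the flag would skip the server entirely, while the rest of the document stays available for the other client categories discussed below.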

I see four broad client usage categories:

  • Public statistic aggregators and server lists - publicly listing servers and perhaps keeping a history of their stats
  • Inquiry services - Stuff where you can find out a specific thing about a server you already know (like https://version.diaspora.social/)
  • Private statistic aggregators - People scraping the network for fun and/or (scientific) profit
  • Other services - Hide/show certain features to a local user depending on the capabilities of a remote server, efficiency optimizations for pushing out content to other servers (don't bother to send this here, the server is too old/doesn't support the feature)

Can others think of more?

What are we targeting here with this property?

If we add this, we should be clear in the documentation about the intended use and that it requires the client's cooperation, so it can never be interpreted as a privacy feature. Whether or not to expose certain statistics for privacy reasons always remains the responsibility of the implementing server software.

What are we targeting here with this property?

I was primarily thinking of using it for public aggregators (misskey-dev/misskey#11213).

commented

As for the-federation.info, I could see a use case - so what about an optional robots entry:

...
"robots": {
  "disallow": [],
  "allow": ["*"]
},
...

which follows the robots.txt convention for agent definition?

That way we stay generic and don't interfere with the proposed usages ...
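
To make the agent idea concrete: a server that wants to be skipped by one particular aggregator but stay visible to everyone else could publish something like the following, where the string naming the crawler is purely a placeholder (how clients would identify themselves is picked up further down in the thread):

...
"robots": {
  "disallow": ["some-server-index"],
  "allow": ["*"]
},
...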

commented

-> #82

I'm not sure I'm a big fan of adopting the robots.txt language and referencing it here, actually; it might get a little ambiguous whether it's meant to restrict NodeInfo clients only or crawlers of any part of the website in general. Also, "Web Robots" and "crawling" are not well-defined terms anywhere in the standard so far. See the first paragraph of https://github.com/jhass/nodeinfo/blob/main/PROTOCOL.md

Come to think of it we don't really specify anywhere that a client should have a specific identifier and communicate it to the server in a particular way. So we should probably extend the protocol in this regard.

Perhaps more fitting terms that come to my mind would be things like client_policy, allowed_clients, allowed_usages or something along those lines.

If we wanted to avoid clients having to pick an identifier, we could also try to define some broad usage categories for the data and allow servers to pick which ones to permit. Of course that wouldn't allow excluding specific clients or allowing only specific clients (beyond the server blocking stuff via firewall rules).
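
As a sketch of that alternative, the document could carry a list of permitted usage categories instead of client identifiers, reusing one of the names floated above and categories from this thread (none of this exists in the schema today):

...
"allowed_usages": ["inquiry", "internal"],
...

A server publishing this would be saying: inquiry-style lookups and internal use are fine, but please leave me out of public lists and statistics.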

The approach I'm using in my crawler which feeds into https://nodes.fediverse.party/ and https://the-federation.info/ is: check robots.txt, check software-specific private/hide_in_statistics properties (for GNU Social, Friendica, Hubzilla, and Red). I wish NodeInfo had standardized a flag which said "don't include me into any data sets, don't count me toward any statistics".

robots.txt (and any fields based on the same idea) seems like an untenable solution to me, because it requires the administrator to keep their "disallow" lists up to date with all new crawlers.
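
The standardized flag wished for above could be as simple as a single boolean that a crawler checks before recording anything about the server; for example (the property name is invented, only the metadata object itself exists in the current schema):

...
"metadata": {
  "excludeFromStatistics": true
},
...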

commented

Known use cases: discovery; indexing; statistics; internal (the node uses it itself); ... ?

commented

I would go for opt-out