corona-warn-app / cwa-wishlist

Central repository to collect community feature requests and improvements. Development of the CWA ends on May 31, 2023. You can still warn other users until April 30, 2023. More information:

https://coronawarn.app/en/faq/#ramp_down

Discussion/Question: Is it correct to assume that ca. 99% of all issued warnings are false positives?

Ein-Tim opened this issue

commented

Your Question

The Wikipedia article on the Corona-Warn-App includes a section called "Fehlalarme" ("false alarms"), which states the following (quoted here in English translation):

Evaluations have shown that every time a positive test result is shared, a red warning is triggered in the CWA for many people. In March/April 2021, for example, about 6 other users were warned each time.[83] If the CWA had been used by everyone at that time, rather than by an estimated 35% of the population, an average of 17 people would have received a red warning. For 16 of them it would have been a false alarm, and only in one case would the warning have been justified, because at an R value of around 1, as at the time, each infected person infects on average only one other person. Accordingly, an estimated 1 of 17 people receiving a red warning (5.9%) should have become infected through the contact. Indeed, user surveys showed that following 13,493 red warnings, 792 (6%) tested positive for SARS-CoV-2.[83]

In February 2022, an average of 23 people were alerted each time a positive test result was shared.[84] Assuming that 35% of the population used the CWA, it can be estimated as above that of 65 people alerted with a red warning, only one became infected (1.5%), while for the other 64 (98.5%) it was a false alarm.
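For clarity, the arithmetic behind the quoted estimate can be written out as follows. This is a reconstruction of the article's reasoning with its own numbers (w warned users per shared positive result, adoption rate a ≈ 35%, R ≈ 1):

```latex
% Extrapolating warnings to full adoption: W = w / a.
\[
W_{\text{Mar/Apr 2021}} = \frac{6}{0.35} \approx 17, \qquad
W_{\text{Feb 2022}} = \frac{23}{0.35} \approx 65
\]
% With R \approx 1, each sharer infects roughly one contact, so the article
% takes 1/W as the share of red warnings followed by an infection:
\[
\frac{1}{17} \approx 5.9\,\%, \qquad
\frac{1}{65} \approx 1.5\,\%, \qquad
\frac{64}{65} \approx 98.5\,\% \ \text{(``false alarms'' in the article's sense)}
\]
```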

Now my question is:

  • Do you agree with this conclusion? If yes/no, why?
  • What do you think could be done to improve the CWA so that fewer of the warnings are false positives?

Feel free to discuss in this thread.

@Ein-Tim thanks for bringing this to our attention, the section on Fehlalarme seems to be relatively new.

Give us some time to check and reconstruct the math behind this and its conclusions before we answer your questions.

commented

@mlenkeit Yes, it's quite new.

Thanks for checking!

The conclusion to draw from a red card is not "I have COVID", but rather "I have been exposed to COVID and should get tested". The expectation the article conveys is that the app should (somehow) know whether the user was actually infected, which is obviously unrealistic.

The use of the term "false alarm" (Fehlalarm) is very misleading; red cards are to be considered "false alarms" if and only if there has not been an exposure to an infected person. No "false alarm" is present if there has been an exposure but the app user has not been infected, per the intent of the alarm (as explained above).
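To state the disagreement explicitly (my own formalization, not taken from the app documentation or the science blog): the article treats every red warning that is not followed by an infection as a false alarm, while per the intent described above only a warning without an underlying exposure qualifies:

```latex
% Article's reading: false alarm <=> warned but not infected
\[
P_{\text{article}}(\text{false alarm}) \;=\; P(\neg\,\text{infected} \mid \text{red warning})
\]
% Intended reading: false alarm <=> warned but not exposed
\[
P_{\text{intended}}(\text{false alarm}) \;=\; P(\neg\,\text{exposed} \mid \text{red warning})
\]
```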

Similarly, it is not the fault of the app if a user has been infected but no infected contact decided to use Warn Others. By similarly faulty logic, one could argue that a large portion of infections has been "missed" by the app.

Personally, I would rather highlight the positive insight from the science blog articles: the rate of red-card users who were infected was very comparable to the rate seen in manual contact tracing.

(I don't know who EugeneV68579605 is, but if they are reading this GitHub issue: thanks for bringing up the topic on the German Wikipedia's discussion page again.)

Yes, interesting: EugeneV... (also sometimes active on Twitter) and fynngodau follow very similar reasoning, and none of the comments has been answered (yet) by the originator Mtag, assuming that Mtag on Wikipedia is @Mtagxx here on GitHub.

@Ein-Tim thanks for your patience! We have discussed this with the authors of the science blog from RKI. We don't share the conclusion of a 98.5% "false alarm" rate and we'd like to explain why:

Our aspiration for the Corona-Warn-App is to notify users about exposures where the user was exposed to a person carrying the SARS-CoV-2 virus and thus was at risk of being at the receiving end of a SARS-CoV-2 transmission. Just like a risk in any other context, there is a certain probability that it materializes or not. In the context of CWA, the risk (i.e. probability) of transmission heavily depends on external factors that are beyond the control of the app, e.g. whether the users who received/issued the warning were wearing masks, or whether the encounter was indoors or outdoors. Individual disposition and factors such as the degree of protection from previous infections and vaccinations also play a role, but are likewise not considered by the algorithm.

CWA does not claim that a “red risk” card means the user got infected with SARS-CoV-2.

We agree with @fynngodau that a “false alarm” would only be present if the user received an exposure notification although he/she was not exposed to anyone who could have transmitted SARS-CoV-2. Such a “false alarm” can for example happen if someone fraudulently obtains a teleTAN via the hotline or if a user accidentally receives a positive test result instead of a negative one due to a mistake by the lab or POC (“Teststelle”). We don’t have any indicators that such cases happen at a scale that would significantly impact the data that we collect with Privacy-Preserving Analytics (PPA).

We don’t consider a “false alarm” to be present if the user receives an exposure notification but a subsequent test does not confirm an infection. In such cases, the risk simply did not materialize.

This approach can be compared to what public health authorities (PHA, “Gesundheitsämter”) used to do with manual contact tracing:

  • The PHA receives a list of people who may have been exposed to a confirmed-positive person (e.g. because they visited the same venues).
    In CWA, this is similar to collecting Bluetooth beacons (i.e. RPIs).
  • The PHA filters the list based on time, the layout of the venue, proximity, etc.
    In CWA, this is similar to filtering the Bluetooth beacons by attenuation (see the sketch after this list).
  • The PHA contacts the remaining people on the list and informs them about the exposure (i.e. risk of transmission).
    In CWA, this is similar to issuing an exposure notification (i.e. "red risk" card).
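To illustrate the filtering step, here is a minimal Python sketch of attenuation-based filtering. The data structure, field names, and the threshold value are made up for illustration; the real ENF risk calculation uses configurable attenuation buckets and weights rather than a single cut-off:

```python
from dataclasses import dataclass

@dataclass
class BeaconSighting:
    rpi: str              # Rolling Proximity Identifier (hex string)
    minutes: int          # duration of the sighting window
    attenuation_db: int   # TX power minus measured RSSI; higher = farther away

# Hypothetical threshold: treat sightings above this attenuation as "too far".
MAX_ATTENUATION_DB = 63

def close_exposure_minutes(sightings: list[BeaconSighting]) -> int:
    """Sum exposure minutes over sightings that pass the proximity filter."""
    return sum(s.minutes for s in sightings if s.attenuation_db <= MAX_ATTENUATION_DB)

sightings = [
    BeaconSighting(rpi="a1b2", minutes=12, attenuation_db=55),  # close contact
    BeaconSighting(rpi="c3d4", minutes=30, attenuation_db=78),  # likely far away
]
print(close_exposure_minutes(sightings))  # -> 12
```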

It doesn't mean that all the people who are informed by the PHA will test positive (according to anecdotal evidence, 5 to 10% of them do). As described in Science Blog 1, the accuracy of exposure notifications in CWA is similar, with 6% of users who receive a "red risk" card subsequently testing positive. A key advantage of CWA compared to PHAs is that exposure notifications are a lot faster, and that they scale to situations that realistically cannot be covered by manual contact tracing or where the contact persons remain unidentified (e.g. in public transport).

Given all this, we think that the accuracy of the warnings is good. A further improvement in accuracy might be achievable with a different technology than the Bluetooth-based ENF.

Please note that the association between receiving a “red risk” card and subsequently testing positive is also described in Science Blog 4.

commented

@mlenkeit

Thank you very, very much for your comment! It's really great how you work with the community! Big thanks!

I share the assessment you & the RKI team made and will relay this to the Wikipedia discussion thread.

My question here has been answered in detail, thus I'm closing this issue now.

@Ein-Tim thanks, I'll pass it on to the team 😉

commented

@mlenkeit

Thanks & for sure also a big thanks to the team!