Every EU member state is rushing to implement Digital Green Certificates by the end of June, yet no one is stopping to look at their security.
Digital Green Certificates are a European solution to the problem of free movement in the times of the COVID pandemic. The idea is that while traveling to some other country in the EU, you won’t have to mess about with the random paper confirmation of vaccination you got at the vaccination place but instead will be able to present a standardized and interoperable vaccination or test certificate that the authorities of each member state will validate. The need for interoperability is significant, and thanks to the European Commission, the opportunity to standardize on a single format was seized. In this post, I look at the design of Digital Green Certificates from a security perspective and outline several security issues.
## Digital Green Certificates
The Digital Green Certificate (DGC) is digital proof that a person has been vaccinated against COVID-19, received a negative test result, or recovered from COVID-19. It is valid in all EU countries, and even though it contains the word digital in its name, it can be in the form of a paper or a digital certificate1. In the end it is just a QR code that has the vaccination/test/recovery data in it, signed by an issuing body from some member state. This is supported by public key infrastructure similar to the one used in e-passports that will be centrally operated by the EU (DIGIT). The QR code contains data such as the name of the holder, date of birth, type of the certificate, and respective certificate data (e.g., date of vaccination, vaccine name). Contrary to many claims in the media - one even from the Slovak government agency implementing the DGC apps2 - the data on the QR code can be read by anyone and its confidentiality is not protected.
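To make the confidentiality point concrete, here is a minimal sketch of how anyone can read the QR code's contents. Per the hcert-spec, the text behind the QR code is just the prefix `HC1:` followed by Base45-encoded, zlib-compressed COSE/CBOR data, none of it encrypted (the sample payload below is hypothetical; parsing the inner CBOR is omitted):

```python
import zlib

# Base45 alphabet from draft-faltstrom-base45, as referenced by hcert-spec
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

def b45encode(data: bytes) -> str:
    out = []
    for i in range(0, len(data), 2):
        chunk = data[i:i + 2]
        if len(chunk) == 2:
            v = int.from_bytes(chunk, "big")
            out.append(ALPHABET[v % 45] + ALPHABET[(v // 45) % 45]
                       + ALPHABET[v // (45 * 45)])
        else:  # single trailing byte encodes to two characters
            out.append(ALPHABET[chunk[0] % 45] + ALPHABET[chunk[0] // 45])
    return "".join(out)

def b45decode(text: str) -> bytes:
    out = bytearray()
    for i in range(0, len(text), 3):
        vals = [ALPHABET.index(c) for c in text[i:i + 3]]
        if len(vals) == 3:
            out += (vals[0] + vals[1] * 45 + vals[2] * 45 * 45).to_bytes(2, "big")
        elif len(vals) == 2:
            out.append(vals[0] + vals[1] * 45)
        else:
            raise ValueError("invalid Base45 length")
    return bytes(out)

def decode_dgc_payload(qr_text: str) -> bytes:
    # Strip the context prefix, Base45-decode, then zlib-inflate. What
    # remains is a COSE_Sign1 structure whose CBOR payload (name, date
    # of birth, vaccination data) is signed but NOT encrypted.
    if qr_text.startswith("HC1:"):
        qr_text = qr_text[4:]
    return zlib.decompress(b45decode(qr_text))
```

No key material is needed at any point; signature verification is only required to check authenticity, not to read the data.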
The technical specifications can be found here and are published by the eHealth network, an EU body with members from all member states and Norway. The main technical specifications discussed in this post are the Technical Specifications for Digital Green Certificates:
- Volume 1: overview, formats and trust management v1.0.5
- Volume 2: European Digital Green Certificate Gateway v1.3
- Volume 3: Interoperable 2D Code v1.3
- Volume 4: European Digital Green Certificate Applications v1.3
- Volume 5: Public Key Certificate Governance v1.02
The European Commission has tasked Deutsche Telekom and SAP with developing reference implementations of all of the required components, which can be found under the eu-digital-green-certificates Github organization; take particular note of the dgc-overview repository. Some additional specification details and components are also under the ehn-dcc-development Github organization, specifically the hcert-spec repository. There is also a Slack space, provided by the Linux Foundation Public Health initiative, that you can join here.
In the typical usage scenario, a person gets vaccinated and receives a paper or an email document containing the DGC (a QR code). This QR code can be scanned into a DGC wallet app, which stores the person’s certificates. If the person then decides to travel to another EU state, they can present the DGC using the wallet app or on paper to a border officer, who will use a verifier app to check that the DGC is valid and to verify their identity against the data in the DGC. Both the wallet app and the verifier app communicate with a national backend and are developed by each member state.
## The TAN factor
Many aspects of the DGC architecture are well specified and required of implementors. However, the member states are free to implement their parts of the infrastructure (wallet app, verifier app, issuer app, national backend) as they see fit, with only some interoperability requirements. Volume 4 of the specifications, which concerns the wallet and verifier apps, is not a normative specification but rather a description of the reference implementation, and the closest guide one can get as to how the nationally developed apps are going to look.
The wallet app specification describes how a user imports a DGC into the app. This is where a TAN (Transaction Authentication Number) first appears. The TAN is described as a second factor that is generated together with the DGC when it is issued and then sent to the user. During import of the DGC into the wallet app, the user has to scan the DGC and enter the correct TAN, which is sent to the backend and validated against the TAN stored with the ID of the DGC. The backend allows the import only if the DGC hasn’t been previously imported and the TAN is correct. If several incorrect TAN guesses are sent to the backend for a given DGC ID, that ID is blocked and the DGC can no longer be imported into the wallet app. During this import process, the app also generates a keypair (of an unspecified type) and sends the public key to the backend, where it is stored if the import was successful.
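The import flow above can be sketched as backend logic. This is an illustrative reconstruction, not the reference implementation: the class and method names, return values and the three-attempt block threshold are my assumptions.

```python
MAX_ATTEMPTS = 3  # assumed threshold; the spec does not fix a number

class DgcBackend:
    """Toy model of the national backend's DGC import logic (Volume 4)."""

    def __init__(self):
        self.issued = {}    # dgc_id -> TAN generated at issuance
        self.attempts = {}  # dgc_id -> count of failed TAN guesses
        self.imported = {}  # dgc_id -> wallet public key bound on import

    def register_issuance(self, dgc_id: str, tan: str) -> None:
        # Called when the DGC is issued; TAN is stored with the DGC's ID.
        self.issued[dgc_id] = tan
        self.attempts[dgc_id] = 0

    def import_dgc(self, dgc_id: str, tan: str, wallet_pubkey: bytes) -> str:
        if dgc_id in self.imported:
            return "rejected: already imported"
        if self.attempts.get(dgc_id, 0) >= MAX_ATTEMPTS:
            return "rejected: blocked"
        if self.issued.get(dgc_id) != tan:
            self.attempts[dgc_id] = self.attempts.get(dgc_id, 0) + 1
            return "rejected: wrong TAN"
        # Success: bind the wallet app's generated public key to this DGC.
        self.imported[dgc_id] = wallet_pubkey
        return "ok"
```

Note that the "already imported" and "blocked" branches are exactly what makes the design both single-device and vulnerable to the denial-of-service described later.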
That’s enough of a description of how the app works; let’s now get to where the security issues are. To start off, the TAN is not really a second factor. The specification explicitly allows the TAN to be sent to the user alongside the DGC (page 15 of Volume 4):
> Second, the TAN could be returned in conjunction with the signed DGC (as shown in the flow diagram below) or sent directly to the user’s phone.
See the issue? If the TAN is just sent alongside the DGC, it cannot be a second factor. And why is a second factor needed here at all? If I somehow obtain a user’s DGC, I will just print it on paper and not mess with an app that will not let me import it, or, if I want to make the extra effort, I will make a clone of the app that has no TAN check on import but is otherwise identical. Now you might be thinking: okay, but the legitimate app generates a keypair on import and sends the public key to the backend, so surely having the DGC in a rogue app will not help, because the verifier app checks with the backend that the DGC was imported and somehow challenges the wallet app to prove possession of the corresponding private key. Well, think again. The generated keypair is not mentioned anywhere else in the document, so it does not matter that the rogue app lacks the private key. In fact, it does not even matter that there is a TAN check at all, as the original paper DGC remains valid and can be used just like the digital one.
Thus, the whole TAN check and public key binding on the backend is a completely useless security measure, and it will remain useless as long as paper and in-app DGCs have to be treated the same. As one cannot discriminate against people with paper DGCs, any attempt at better security properties than those of the paper DGC is pointless. The inclusion of security measures that provide no additional security is a red flag when looking at any system, as it usually means that the designers did not know why they added them.
In fact, the introduction of this TAN check and public key binding nonsense adds properties that make the app less usable and thus the system as a whole less secure. As designed, the DGC can be imported into only one wallet app on one device, and only once. See the issue again? This already rules out the scenario where parents traveling with children would both like to have their children’s certificates in their wallet apps, or similar multi-user scenarios. What happens if the user uninstalls the app or loses the device they imported their DGCs into? If a DGC is allowed to reside in only one (official) wallet app, then the wallet app provides worse usability and user experience than just keeping the DGC as a document or image file. This renders any potential security properties gained from users using the app pointless and drives users away from it, potentially towards malicious apps. It is also an undocumented limitation of the system that the reference wallet app does not warn the user about. The added complexity of dealing with DGC recovery after a lost device, which is currently unhandled, is another unnecessary burden introduced with the seemingly innocuous TAN check and public key binding.
As the final cherry on the cake of bad security properties, the validation of the TAN on the backend during DGC import creates the possibility of a DoS attack against DGCs that have not yet been imported, provided the attacker knows or can guess the ID of the DGC. The attack is simple: the attacker sends a few requests with a wrong TAN as if they were importing the target DGC (the request does not contain the whole DGC, just its ID, the TAN, and the generated public key), and after several tries the backend blocks the DGC from being imported into the official app. As the specification does not require DGC IDs to be unpredictable or unknown to the attacker, this attack is clearly feasible; nothing stops a member state from issuing DGCs with IDs that simply increment. This missing requirement, with its important impact on security, is a clear example of the overall state of the DGC specifications with regard to security.
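A minimal sketch of this attack against a member state with incrementing IDs. The endpoint, the block threshold and the ID scheme are all assumptions for illustration; the point is only that the attacker needs the ID, never the DGC itself:

```python
MAX_ATTEMPTS = 3  # assumed block threshold on the backend
failed = {}       # dgc_id -> failed TAN guesses recorded by the backend
blocked = set()   # dgc_ids no longer importable into the official app

def import_request(dgc_id: int, tan: str, public_key: bytes) -> str:
    """Toy stand-in for the backend's import endpoint. The real TAN is
    unknown to the attacker, so every guess here fails and is counted."""
    if dgc_id in blocked:
        return "blocked"
    failed[dgc_id] = failed.get(dgc_id, 0) + 1
    if failed[dgc_id] >= MAX_ATTEMPTS:
        blocked.add(dgc_id)
    return "wrong TAN"

def dos_dgc_range(first_id: int, count: int) -> None:
    # With sequential IDs, exhausting the TAN attempts for each ID in
    # turn blocks every not-yet-imported DGC in the range.
    for dgc_id in range(first_id, first_id + count):
        for _ in range(MAX_ATTEMPTS):
            import_request(dgc_id, tan="000000", public_key=b"junk")
```

At a few requests per DGC, blocking an entire batch of freshly issued certificates is cheap; only unpredictable IDs (or dropping the check, as suggested below) would prevent it.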
I was not the first to spot and point out some of the issues presented here: the Github user jquade posted this issue on the dgc-overview repository on May 2. The issue outlines the pointlessness of the TAN check. It was closed on May 6 with a comment that did not refute any of its points and gave a hand-waving argument for the public key binding, mentioning some future online scenarios.
## Security analysis not included
The DGC specification does not even have a clear threat model describing what sorts of attackers it aims to defend against. Even questions such as: What security properties does the system claim to have? Is it supposed to stop theft of DGCs, counterfeiting of DGCs, impersonation, …? are left unanswered. The only part of the specification explicitly focused on security is found in Volume 1 on pages 8 and 9. It considers risks such as the signing algorithm (ECDSA) being found weak (!?) but disregards many important risks concerning the apps by claiming that:
> These cannot preemptively be accounted for in this specification but must be identified, analyzed and monitored by the Participants.
How incredibly helpful to find this in place of a proper security design and analysis 🤦.
This post presented several issues with the current specification of the Digital Green Certificates. To address them, I suggest the following:
- Drop the TAN and public key binding parts from Volume 4 of the specification as well as the reference implementation. Doing so decreases the complexity of the design (Keep It Simple Stupid), increases the usability of the app and aligns security with the paper form of DGCs.
- Make a proper security design and analysis part of the specification, written and reviewed by experts who know what they are doing. Digital contact tracing caught the attention of many researchers: a search on eprint.iacr.org for “contact tracing” gives 33 results, yet a search for “digital green certificate” gives 0. The EU has plenty of great researchers, and even an ordinary IT security student should be able to spot issues like the ones mentioned here.
- Provide better guidance to the member states’ developers, not only on interoperability but also on their apps, their security, user experience and usability. If this is the state of the specification the developers are working from, the implementations are going to differ and likely be worse (see 2 for an example).
To avoid making this a completely negative post, I will end on a positive note. The transparency of the DGC implementation process is laudable: the specifications are public and readable, the reference implementations along with many of the components used are open-source, and the Github repositories are active, with issues and pull requests being handled. Without all of this, this post would not exist, and one could not even begin to look at the security of this system that is being implemented all across the EU.