Verge: The 12 most important questions about Apple and Google's coronavirus tracking project, answered

April 15, 2020 | By Horst Buchwald

New York, 14.5.2020

On Friday last week, Google and Apple joined forces on an ambitious emergency project, working out a new protocol for tracking the ongoing coronavirus outbreak. It is an urgent, complex undertaking with huge implications for privacy and public health. Similar projects have been successful in Singapore and other countries, but it remains to be seen whether US health authorities will be able to manage such a project, even with the world's largest technology companies helping.

Verge starts by looking at the main features of the project, but points out that there is much more to dig into, including the technical documents published by the two companies. As Verge puts it: "They reveal a lot about what Apple and Google are actually doing with this sensitive data, and where the project falls short. So we have delved into these documents and tried to answer the twelve most pressing questions, starting at the absolute beginning:"

What does this system do?

When someone falls ill with a new disease such as this year’s coronavirus, public health workers try to contain the spread by tracking down and quarantining anyone with whom the infected person has come into contact. This is called contact tracing, and it is a vital tool in containing outbreaks.

The system records contact points without using location data

Essentially, Apple and Google have set up an automated system for contact tracing. It is different from traditional contact tracing and is probably most useful when combined with conventional methods. Most importantly, it can operate on a far larger scale than conventional contact tracing, which will be necessary given how widespread the outbreak has become in most countries. Because it comes from Apple and Google, some of these features will eventually be built into Android phones and iPhones at the operating system level, making this technical solution potentially available to more than three billion phones around the world.

It is important to note that what Apple and Google are working on together is a framework, not an app. They take care of the plumbing and guarantee the privacy and security of the system, but leave the creation of the actual apps that use it to others.

How does that work?

Basically, this system lets your phone keep a record of other phones that have been nearby. As long as the system is running, your phone regularly sends out a small, unique, anonymous code derived from the phone's unique ID. Other phones in range receive this code and remember it, building up a log of the codes they have received and when they received them.

If a person using the system receives a positive diagnosis, they can choose to submit their ID codes to a central database. When your phone checks this database, it runs a local scan to see whether any of the codes in its log match the IDs in the database. If there is a match, you receive a notification on your phone that you may have been exposed.

This is the simple version, but you can already see how useful such a system could be. Essentially, it lets you record contact points (exactly what contact tracers need) without collecting precise location data, while storing only minimal information in the central database.
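
To make the flow concrete, here is a minimal, purely illustrative sketch of that broadcast-log-match cycle in Python. The class and method names are invented for this example; they are not the actual Apple and Google API.

```python
# Illustrative sketch only: invented names, not the real Apple/Google framework.
import os
import time

class Phone:
    def __init__(self):
        self.current_code = os.urandom(16)  # small, anonymous rotating code
        self.received_log = []              # (code, timestamp) pairs; stays on the device

    def broadcast(self):
        """Return the code this phone is currently sending out over Bluetooth."""
        return self.current_code

    def receive(self, code):
        """Remember a code heard from a nearby phone, plus when it was heard."""
        self.received_log.append((code, time.time()))

    def check_exposure(self, published_codes):
        """Compare the on-device log against codes published by confirmed-positive
        users. The comparison happens locally; the log never leaves the phone."""
        return any(code in published_codes for code, _ in self.received_log)

# Two phones near each other exchange codes.
alice, bob = Phone(), Phone()
bob.receive(alice.broadcast())

# Later, Alice tests positive and chooses to publish her codes to the central database.
central_database = {alice.broadcast()}
print(bob.check_exposure(central_database))  # True -> Bob would get an exposure alert
```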

How do you report that you have been infected?

The published documents are less detailed on this point. The specification assumes that only legitimate healthcare providers will be able to submit a diagnosis, to ensure that only confirmed diagnoses generate alerts. (Nobody wants trolls and hypochondriacs flooding the system.) It is not entirely clear how this will work, whether it will be handled within the app or through some additional authentication step before an infection is registered centrally, but it looks like a solvable problem.

How does the phone send these signals?

The short answer is: Bluetooth. The system uses the same antennas as your wireless earphones, although it relies on the Bluetooth Low Energy (BLE) version of the specification, which puts less strain on the battery. This particular system uses a version of the BLE beacon system that has been in use for years, modified to work as a two-way code exchange between phones.

How far does the signal reach?

We do not know exactly yet. Theoretically, BLE can register connections up to a distance of 100 meters, but that depends heavily on specific hardware settings, and the signal is easily blocked by walls. Many of the most common applications of BLE, such as pairing an AirPods case with your iPhone, have an effective range closer to six inches. The project's engineers are optimistic that they can tune the range in software through "thresholding", essentially discarding weaker signals, but since there is no actual software yet, most of the relevant decisions have yet to be made.
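
As a rough illustration of what that thresholding could look like, here is a short sketch; the cutoff value, data layout, and function name are assumptions made for this example, not figures from the specification.

```python
# Illustrative only: discard sightings whose Bluetooth signal strength (RSSI,
# in dBm) is too weak to suggest close contact. The -70 dBm cutoff is an
# assumed value for illustration, not one taken from the spec.
RSSI_THRESHOLD_DBM = -70

def close_contacts(sightings):
    """Keep only the sightings likely to come from a phone that was actually nearby.

    sightings: list of (proximity_id, rssi_dbm, timestamp) tuples.
    """
    return [s for s in sightings if s[1] >= RSSI_THRESHOLD_DBM]

log = [
    (b"id-1", -55, 1586900000),  # strong signal: probably within a few meters
    (b"id-2", -92, 1586900030),  # weak signal: probably far away or behind a wall
]
print(close_contacts(log))  # only the first sighting passes the cutoff
```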

At the same time, we are not entirely sure what the best range for this kind of alert is. Social distancing rules usually recommend staying one meter away from other people in public, but that could easily change as we learn more about how the novel coronavirus spreads. Officials will also be wary of sending out so many alerts that the app becomes useless, which could make the ideal range even smaller.

So this is an app?

More or less. In the first part of the project (to be completed by mid-May), the system will be built into official public health apps that will send out BLE signals in the background. These apps will be created by government health authorities and not by technology companies, which means that the authorities will be responsible for many important decisions on how to notify users and what to recommend when a person has been exposed.

Ultimately, the team hopes to incorporate this functionality directly into the iOS and Android operating systems, much like a native dashboard or a switch in the settings menu. But this will take months, and it will still require users to download an official public health app if they need to submit information or receive a warning.

Is this really safe?

For the most part, the answer seems to be yes. Based on the published documents, it would be quite difficult to work back to sensitive information from the Bluetooth codes alone, which means you can run the app in the background without worrying that it is assembling anything potentially incriminating. The system itself does not identify you personally and does not log your location. Of course, the health apps built on top of this system will eventually need to know who you are if you want to upload your diagnosis to the health authorities.

Could hackers use this system to compile a big list of everyone who has had the disease?

That would be very difficult, but not impossible. The central database stores all the codes sent out by infected people while they were contagious, and it is quite possible that a hacker could get hold of those codes. The engineers have done a good job of ensuring that the codes cannot be traced directly back to a person's identity, but it is possible to imagine scenarios in which those safeguards break down.

A diagram from the cryptography white paper explains the three levels of keys in the system.

To explain why, we need to get a bit more technical. The cryptography specification lays out three levels of keys for this system: a master private key that never leaves your device, a day key generated from that private key, and the chain of "proximity IDs" generated from the day key. Each of these steps is performed by a cryptographically robust one-way function, so you can generate a proximity ID from a day key, but not the other way around. More importantly, you can tell which proximity IDs came from a particular day key, but only if you have that day key to start with.
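
To make the idea of a one-way key hierarchy concrete, here is a minimal sketch that uses HMAC-SHA256 as a stand-in for the specification's actual derivation functions; the key names, sizes, and inputs are illustrative assumptions, not the published algorithm.

```python
# Illustrative stand-in for the three-level key hierarchy: master key -> day key
# -> proximity IDs. Each step is a one-way function, so it cannot be reversed.
import hashlib
import hmac
import os

master_key = os.urandom(32)  # the private key that never leaves the device

def derive_day_key(master_key, day_number):
    """Derive the key for one day; the master key cannot be recovered from it."""
    return hmac.new(master_key, b"day|%d" % day_number, hashlib.sha256).digest()

def derive_proximity_id(day_key, interval):
    """Derive one broadcast proximity ID for a time slot within that day;
    the day key cannot be recovered from the ID."""
    return hmac.new(day_key, b"interval|%d" % interval, hashlib.sha256).digest()[:16]

day_key = derive_day_key(master_key, 18365)      # e.g. a simple day counter
proximity_id = derive_proximity_id(day_key, 42)  # one of many IDs broadcast that day
```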

Your phone's log is a list of proximity IDs (the lowest level of key), so they are not much use on their own. If you test positive, you reveal a lot more by publishing the day keys for each day you were contagious. Since those day keys are now public, your own device can do the math to tell whether any of the proximity IDs in its log came from them, and if so, it generates an exposure warning.
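
Continuing that sketch, the local matching step could look roughly like this: regenerate every proximity ID the published day keys can produce and compare them against the on-device log. The ten-minute rotation interval and the derivation function are assumptions carried over from the previous example.

```python
# Illustrative only: match published day keys against the local log by
# regenerating the proximity IDs they would have produced.
import hashlib
import hmac

INTERVALS_PER_DAY = 144  # assuming the broadcast ID rotates every ten minutes

def derive_proximity_id(day_key, interval):
    # Same stand-in derivation as in the previous sketch.
    return hmac.new(day_key, b"interval|%d" % interval, hashlib.sha256).digest()[:16]

def exposure_detected(published_day_keys, local_log):
    """Return True if any logged proximity ID was derived from a published day key.

    local_log: list of (proximity_id, timestamp) pairs kept on the device.
    """
    candidates = {
        derive_proximity_id(day_key, i)
        for day_key in published_day_keys
        for i in range(INTERVALS_PER_DAY)
    }
    return any(logged_id in candidates for logged_id, _timestamp in local_log)
```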

As cryptographer Matt Tait points out, this leads to a meaningful limitation on the privacy of people who test positive under this system. Once those day keys are public, you can figure out which proximity IDs are associated with a single ID. (Remember, that is exactly what the app has to do to confirm an exposure.) While specific apps can restrict the information they share, and everyone will surely do their best, you are now outside the hard protections of cryptography. You can imagine a malicious app or a network of Bluetooth sniffers collecting proximity IDs in advance, tying them to specific identities, and later correlating them with the day keys pulled from the central list. It would be difficult to do, and even harder to do for every single person on the list. Even then, the server would only hand over codes from the last 14 days (that is all that matters for contact tracing, so that is all the central database stores). But it would not be flatly impossible, which is what cryptography normally strives for.

To sum up, it is hard to absolutely guarantee someone's anonymity once they report a positive test through this system. In the system's defence, though, that is a hard guarantee to make under any circumstances. Under social distancing, we are all limiting our in-person contacts, so if you learn that you were exposed on a particular day, the list of potential carriers is already fairly short. Add the quarantine and, in some cases, hospitalization that come with a COVID-19 diagnosis, and it becomes very difficult to maintain complete medical privacy while still warning the people who may have been exposed. In some ways, this trade-off is inherent to contact tracing; technical systems can only mitigate it.

Also, bear in mind that the best method of contact tracing we currently have is a human interviewer asking you who you have been in contact with. It is basically impossible to build a completely anonymous contact tracing system.

Could Google, Apple or a hacker use it to find out where I’ve been?

Only under very specific circumstances. If someone collects your proximity IDs, and you get a positive test result and decide to share your diagnosis, and they carry out the whole procedure described above, they might be able to link you to a specific location where your proximity IDs were spotted in the wild.

However, it is important to note that neither Apple nor Google is sharing information that you could place directly on a map. Google holds much of that kind of information and has shared it at an aggregate level, but it is not part of this system. Google and Apple may already know where you are, but they do not tie that information to this record. So even if an attacker managed to access this data, they would still end up knowing less than most apps on your phone already do.

Could someone use this to find out who I’ve come into contact with?

That would be much more difficult. As mentioned above, your phone keeps a log of all the proximity IDs it receives, but the specification is clear that this log should never leave your phone. As long as the log stays on your device, it is protected by the same device encryption that protects your texts and emails.

Even if a hacker stole your phone and managed to break through that security, they would only have the codes you received, and it would be very difficult to work out where those codes originally came from. Without a day key, they would have no way of correlating one proximity ID with another, making it hard to pick out a single actor in the jumble of Bluetooth signals, let alone figure out who met whom. And crucially, robust cryptography makes it impossible to work backwards to the associated day key or a personal ID.

What if I don’t want my phone to do this?

Don't install the app, and once the operating systems are updated over the summer, simply leave the "contact tracing" setting switched off. Apple and Google insist that participation is voluntary, and unless you take proactive steps to join contact tracing, you should be able to use your phone without being involved at all.

Is this just a stealth monitoring system?

This is a sensitive question. In a sense, contact tracing is surveillance. Public health work is full of medical surveillance, simply because it is the only way to find infected people who are not sick enough to go to a doctor. Hopefully, given the catastrophic damage already caused by the pandemic, people will be prepared to accept this level of surveillance as a temporary measure to halt the further spread of the virus.

A better question is whether this system carries out that surveillance in a fair and useful way. It matters that the system is voluntary and that it does not give out more data than necessary. At the moment, however, we only have the protocol, and it remains to be seen whether governments will try to implement the idea in a more invasive or overreaching way.

When the protocol is translated into concrete applications, there will be a lot of important decisions about how it is used and how much data is collected outside the protocol. Governments will make these decisions, and they may make them badly – or worse, they may not make them at all. So even if you are happy about what Apple and Google have set out here, they can only throw the ball – and a lot depends on what governments do after they catch it.