10 March 2022

How Bluetooth compromises your privacy and security

Smartphone users worldwide have become victims of one of the greatest scams currently active, all in the name of so-called health.

One of the purposes of the scam is data collection on your person, not the alleged health situation that gained headlines in 2020.


Here is how the Electronic Frontier Foundation described it in an article under the headline:

Apple and Google’s COVID-19 Exposure Notification API: Questions and Answers


Apple and Google are undertaking an unprecedented team effort to build a system for Androids and iPhones to interoperate in the name of technology-assisted COVID-19 contact tracing.

The companies’ plan is part of a torrent of proposals to use Bluetooth signal strength to enhance manual contact tracing with proximity-based mobile apps. As Apple and Google are an effective duopoly in the mobile operating system space, their plan carries special weight. Apple and Google’s tech would be largely decentralized, keeping most of the data on users’ phones and away from central databases. This kind of app has some unavoidable privacy tradeoffs, as we’ll discuss below, and Apple and Google could do more to prevent privacy leaks. Still, their model is engineered to reduce the privacy risks of Bluetooth proximity tracking, and it’s preferable to other strategies that depend on a central server.

Proximity tracking apps might be, at most, a small part of a larger public health response to COVID-19. This use of Bluetooth technology is unproven and untested, and it’s designed for use in smartphone apps that won’t reach everyone. The apps built on top of Apple and Google’s new system will not be a “magic bullet” technosolution to the current state of shelter-in-place. Their effectiveness will rely on numerous tradeoffs and sufficient trust for widespread public adoption. Insufficient privacy protections will reduce that trust and thus undermine the apps’ efficacy.

How Will It Work?

As soon as today, Apple and Google are beginning to roll out parts of the iPhone and Android infrastructure that developers need to be able to build Bluetooth-based proximity tracking apps. If you download one of these apps, it will use your phone’s Bluetooth chip to do what Bluetooth does: emit little radio pings to find other devices. Usually, these pings are looking for your external speakers or wireless mouse. In the case of COVID-19 proximity tracking apps, they will be reaching out to nearby people who have also opted into using Bluetooth for this purpose. Their phones will also be emitting and listening for those pings. The apps will use Bluetooth signal strength to estimate the distance between the two phones. If they are sufficiently close—6 feet or closer, based on current CDC guidance—both will log a contact event.

There are now many different proposals to do basically this same thing, with slightly different considerations for efficiency, security, and privacy. The rest of this post looks at Apple and Google’s proposal (version 1.1) in particular.

Each phone will generate a new special-purpose private key each day, known as a “temporary exposure key.” It will then use that key to generate random identification numbers called “rolling proximity identifiers” (RPIDs). Pings will go out at least once every five minutes when Bluetooth is enabled. Each ping will contain the phone’s current RPID, which will change every 10 to 20 minutes. This is meant to reduce the risk that third-party trackers can use the pings to passively track people’s locations. The operating system will save all of its temporary exposure keys, and log all the RPIDs it comes into contact with, for the past 2 weeks.

Proximity tracking apps might be, at most, a small part of a larger public health response to COVID-19.

If an app user learns they are infected, they can grant a public health authority permission to publicly share their temporary exposure keys. In order to prevent people from flooding the system with false alarms, health authorities need to verify that the user is actually infected before they may upload their keys. After they are uploaded, a user’s temporary exposure keys are known as “diagnosis keys.” The diagnosis keys are stored in a public registry and available to everyone else who uses the app. 

The diagnosis keys contain all the information needed to re-generate the full set of RPIDs associated with each infected user’s device. Participating apps can use the registry to compare the RPIDs a user has been in contact with against the RPIDs of confirmed COVID-19 carriers. If the app finds a match, the user gets a notification of their risk of infection.
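To make the mechanics concrete, here is a minimal Python sketch of the flow described above: a fresh daily key, RPIDs derived from it, and on-device matching against published diagnosis keys. It is an illustration only; the actual specification derives RPIDs with HKDF and AES rather than the HMAC stand-in used here, and every name in the sketch is made up.

    import hmac
    import hashlib
    import os

    RPID_BYTES = 16          # identifiers are short and opaque
    INTERVALS_PER_DAY = 144  # one RPID per 10-minute slot

    def new_temporary_exposure_key() -> bytes:
        # A fresh random per-day key, generated on the device.
        return os.urandom(16)

    def rpid(tek: bytes, interval: int) -> bytes:
        # Derive the rolling proximity identifier for one time slot.
        # (Illustrative HMAC; the real spec uses HKDF/AES.)
        msg = b"CT-RPID" + interval.to_bytes(4, "little")
        return hmac.new(tek, msg, hashlib.sha256).digest()[:RPID_BYTES]

    def rpids_for_day(tek: bytes) -> set:
        # Every identifier a phone would broadcast under one daily key.
        return {rpid(tek, i) for i in range(INTERVALS_PER_DAY)}

    observed_rpids = set()  # filled by the Bluetooth scanner over ~14 days

    def check_exposure(diagnosis_keys) -> bool:
        # Compare published diagnosis keys against locally logged RPIDs.
        return any(rpids_for_day(k) & observed_rpids for k in diagnosis_keys)

The crucial property is that the matching runs entirely on the device: the public registry only ever learns the keys that infected users chose to upload, never whom anyone met.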

The program will roll out in two phases. In phase 1, Google and Apple are building a new API into their respective platforms. This API will contain the bare-bones functionality necessary to make their proximity-tracing scheme work on both iPhones and Androids. Other developers will have to build the apps that actually use the new API. Draft specifications for the API have already been published, and it could be available for developers to use this week. In phase 2, the companies say that proximity tracking “will be introduced at the operating system level to help ensure broad adoption.” We know a lot less about this second phase.

Will It Work?

Several technical and social challenges stand in the way of automated proximity tracking. First, these apps assume that “cell phone = human.” But even in the U.S., cell phone adoption is far from universal. Elderly people and low-income households are less likely to own smartphones, which could leave out many people at the highest risk for COVID-19. Many older phones won’t have the technology necessary for Bluetooth proximity tracking. Phones can be turned off, left at home, run out of battery, or be set to airplane mode. So even a proximity tracking system with near-universal adoption is going to miss millions of contacts each day.

These apps assume that “cell phone = human,” but cell phone adoption is far from universal.

Second, proximity tracking apps have to make the profound leap from “there is a strong Bluetooth signal near me” to “two humans are experiencing an epidemiologically relevant contact.” Bluetooth technology was not made for this. An app may log a connection when two people wearing masks briefly pass each other on a windy sidewalk, or when two cars with windows up sit next to each other in traffic. The proximity of a patient to a nurse in full PPE may look the same to Bluetooth as the proximity of two people kissing. Also, Bluetooth can be disrupted by large concentrations of water, like the human body. In some situations, although two people may be close enough to touch, their phones may not be able to establish radio contact. Accurately estimating the distance between two devices is even more difficult. 
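The distance estimate itself typically comes from a log-distance path-loss model along the lines of the sketch below. The calibration constants here are assumptions: in practice the transmit power varies per device model and the path-loss exponent varies with walls, pockets, and the water-filled humans in between, which is exactly why the estimate is so unreliable.

    def estimate_distance_m(rssi_dbm: float,
                            tx_power_dbm: float = -59.0,
                            path_loss_exponent: float = 2.0) -> float:
        # Log-distance path-loss model: rssi = tx_power - 10*n*log10(d),
        # solved for d. tx_power_dbm is the expected RSSI at 1 m (a
        # per-device calibration value); the exponent n is ~2 in free
        # space and higher indoors. Both defaults are assumptions.
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

    # The same -70 dBm reading maps to very different distances
    # depending on the assumed environment:
    print(estimate_distance_m(-70, path_loss_exponent=2.0))  # ~3.5 m
    print(estimate_distance_m(-70, path_loss_exponent=3.0))  # ~2.3 m

A phone has no way of knowing which exponent applies at any given moment, so “closer than 6 feet” is at best an educated guess.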

Third, Apple and Google’s proposal currently specifies that phones will broadcast signals as seldom as once every five minutes. So even under otherwise optimal conditions, two phones may not log a contact until they’ve been near each other for the requisite amount of time.

Fourth, a significant portion of the population must actually use the apps. In Singapore, a government-developed app has only achieved about 20% adoption after several weeks. As a mobile platform duopoly, Apple and Google are in perhaps the best position possible to encourage the deployment of a new piece of software at scale. Even so, adoption may be slow, and it will never be universal.

Will It Be Private and Secure?

The truth is, nobody really knows how effective proximity tracking apps will be. Further, we need to weigh the potential benefits against the very real risks to privacy and security.

First, any proximity tracking system that checks a public database of diagnosis keys against RPIDs on a user’s device—as the Apple-Google proposal does—leaves open the possibility that the contacts of an infected person will figure out which of the people they encountered is infected. For example, if you have a contact with a friend, and your friend reports that they are infected, you could use your own device’s contact log to learn that they are sick. Taken to an extreme, bad actors could collect RPIDs en masse, connect them to identities using face recognition or other tech, and create a database of who’s infected. Other proposals, like the EU’s PEPP-PT and France and Germany’s ROBERT, purport to prevent this kind of attack, or at least make it more difficult, by performing matching on a central server; but this introduces more serious risks to privacy.

Second, Apple and Google’s choice to have infected users publicly share their once-per-day diagnosis keys—instead of just their every-few-minute RPIDs—exposes those people to linkage attacks. A well-resourced adversary could collect RPIDs from many different places at once by setting up static Bluetooth beacons in public places, or by convincing thousands of users to install an app. The tracker will receive a firehose of RPIDs at different times and places. With just the RPIDs, the tracker has no way of linking its observations together. 

[Image: a plain street map with several red dots indicating different Bluetooth pings.]

If a bad actor were to set up a Bluetooth beacon or use an app to collect the location of people’s RPIDs, all they would get is a map like this: lots of different pings, but no indication of which pings belong to which individual.

But once a user uploads their daily diagnosis keys to the public registry, the tracker can use them to link together all of that person’s RPIDs from a single day. 

[Image: the same plain street map, this time with a line connecting some of the different red Bluetooth ping dots into one person's daily route.]

If someone uploads their daily diagnosis keys to a central server, a bad actor could then use those keys to link together multiple RPID pings. This can expose their daily routine, such as where they live and work.

This can create a map of the user’s daily routine, including where they work, live, and spend time. Such maps are highly unique to each person, so they could be used to identify the person behind the uploaded diagnosis key. Furthermore, they can reveal a person’s home address, place of employment, and trips to sensitive locations like a church, an abortion clinic, a gay bar, or a substance abuse support group. The risk of location tracking is not unique to Bluetooth apps, and actors with the resources to pull off an attack like this likely have other ways of acquiring similar information from cell towers or third-party data brokers. But the risks associated with Bluetooth proximity tracking in particular should be reduced wherever possible.
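To see why the once-per-day keys are the weak point, consider a sketch of what a tracker holding beacon logs could do the moment a diagnosis key is published. The derivation helper mirrors the illustrative one from the earlier sketch; none of this comes from a real implementation.

    import hmac
    import hashlib

    def rpids_for_day(tek: bytes) -> set:
        # Same illustrative derivation as in the earlier sketch.
        return {hmac.new(tek, b"CT-RPID" + i.to_bytes(4, "little"),
                         hashlib.sha256).digest()[:16] for i in range(144)}

    beacon_log = []  # (rpid, timestamp, beacon_location) tuples, citywide

    def route_for(diagnosis_key: bytes, log: list) -> list:
        # Before the key is published, the log is just unrelated pings.
        # Afterwards, every RPID derived from the key is known to come
        # from one device, so sorting the matches by time yields that
        # person's route for the day: home, work, everywhere in between.
        day_rpids = rpids_for_day(diagnosis_key)
        return sorted((o for o in log if o[0] in day_rpids),
                      key=lambda o: o[1])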

This risk can be mitigated by shortening the time that a single diagnosis key is used to generate RPIDs, at the cost of increasing the download size of the exposure database. Similar projects, like MIT’s PACT, propose using hourly keys instead of daily keys. 

Third, police may seek data created by proximity apps. Each user’s phone will store a log of their physical proximity to the phones of other people, and thus of their intimate and expressive associations with some of those people, for several weeks. Anyone who has access to the proximity app data from two users’ phones will be able to see whether, and on what days, they have logged contacts with each other. This risk is likely inherent to any proximity tracking protocol. It should be mitigated by giving users the option to selectively turn off the app and delete proximity data from certain time periods. Like many other privacy threats, it should also be mitigated with strong encryption and passwords.

Apple and Google’s protocol may be susceptible to other kinds of attacks. For example, there’s currently no way to verify that the device sending an RPID is actually the one that generated it, so trolls could collect RPIDs from others and rebroadcast them as their own. Imagine a network of Bluetooth beacons set up on busy street corners that rebroadcast all the RPIDs they observe. Anyone who passes by a “bad” beacon would log the RPIDs of everyone else who was near any one of the beacons. This would lead to a lot of false positives, which might undermine public trust in proximity tracing apps—or worse, in the public health system as a whole.
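Nothing in a ping authenticates its sender, so the rebroadcast attack requires no cryptography at all; a hostile beacon is simple record-and-replay, as in this hypothetical sketch:

    class ReplayBeacon:
        # A malicious beacon on a busy corner: everything it hears, it
        # repeats. A receiving phone has no way to distinguish a replayed
        # RPID from one sent by the device that generated it.
        def __init__(self):
            self.heard = set()

        def on_advertisement(self, rpid: bytes) -> None:
            self.heard.add(rpid)   # record every RPID that passes by

        def next_broadcast(self) -> set:
            return self.heard      # ...and rebroadcast all of them

Every passer-by now logs “contacts” with everyone who ever walked past this beacon (or any beacon networked with it); if any of those people later test positive, the rest receive false exposure alerts.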

What Should App Developers Do?

Apple and Google’s phase 1 is an API, which leaves it to the rest of the world to develop the actual apps that use the new API. Google and Apple have said they intend “public health authorities” to make apps. But most health authorities won’t have the in-house technical resources to do that, so it’s likely they will partner with private companies. Anyone who builds an app on top of the interface will have to do a lot of things right to make sure it’s private and secure. 

Bad-faith app developers may try to tear down the tech giants’ carefully constructed privacy guarantees. For example, although a user’s data is supposed to stay on their device, an app with access to the API might be able to upload everything to a remote server. It could then link daily private keys to a mobile ad ID or other identifier, and exploit users’ association history to profile them. It could also use the app as a “Trojan horse” to convince users to agree to a whole suite of more invasive tracking.

So, what’s a responsible app developer to do? For starters, they should respect the protocol they’re building on. Developers shouldn’t try to graft a more “centralized” protocol, which shares more data with a central authority, on top of Apple and Google’s more “decentralized” model that keeps users’ data on their devices. Also, developers shouldn’t share any data over the Internet beyond what is absolutely necessary: just uploading diagnosis keys when an infected user chooses to do so.

Developers should be extremely up-front with their users about what data the app is collecting and how to stop it. Users should be able to stop and start sharing RPIDs at any time. They also should be able to see the list of the RPIDs they’ve received, and delete some or all of that contact history.

The whole system depends on trust.

Equally important is what not to do. This is a public health crisis, not a chance to grow a startup. Developers should not force users to sign up for an account for anything. Also, they shouldn’t ship a contact tracing app with extra, unnecessary features. The app should do its job and get out of the way, not try to onboard users to a new service. 

Obviously, proximity tracing apps shouldn’t have anything to do with ads (and the exploitative, data-sucking mess that comes with them). Likewise, they shouldn’t use analytics libraries that share data with third parties. In general, developers should use strong, transparent technical and policy safeguards to wall this data off to COVID-19 purposes and only COVID-19 purposes.

The whole system depends on trust. If users don’t trust that an app is working in their best interests, they will not use it. So developers need to be as transparent as possible about how their apps work and what risks are involved. They should publish source code and documentation so that tech-savvy users and independent technologists can check their work. And they should invite security audits and penetration testing from professionals to be as confident as possible that their apps actually do what they say they will.

All of this will take time. There’s a lot that can go wrong, and too much is at stake to afford rushed, sloppy software. Public health authorities and developers should take a step back and make sure they get things right. And users should be wary of any apps that ship out in the days following Apple and Google’s first API release.

What Should Apple and Google Do?

Apple and Google should be transparent about exactly what their criteria are.

During the first phase, Apple and Google have said that the API can “only [be] used for contact tracing by public health authorities apps,” which “will receive approval based on a specific set of criteria designed to ensure they are only administered in conjunction with public health authorities, meet our privacy requirements, and protect user data.” Apple and Google should be transparent and specific about exactly what these criteria are. Through these criteria, the companies can control what other permissions apps have. For example, they could prevent COVID-19 proximity tracking apps from accessing mobile ad IDs or other device identifiers. They could also make more detailed policy prescriptions, like requiring that any app using the API have a clear mechanism for users to go back and delete parts of their contact log. Apple and Google’s app store approval criteria and related restrictions must also be evenly applied; if Apple and Google make exceptions for governments or companies that they are friendly with, they would undermine the trust necessary for informed consent.

In the second phase, the companies will build the proximity tracking technology directly into Android and iOS. This means that no app will be needed initially, though Apple and Google propose that the user be prompted to download a public health app if an exposure match is detected. All of the recommendations for app developers above also apply to Apple and Google here. Critically, the promised opt-in must obtain specific, informed consent from each user before activating any kind of proximity tracking. They need to make it easy for users who opt in to later opt out, and to view and delete the data that the device has collected. They should create strong technical barriers between the data collected for proximity tracking and everything else. And they should open-source their implementations so that independent security analysts can check their work.

This program must sunset when the COVID-19 crisis is over.

Finally, this program must sunset when the COVID-19 crisis is over. Proximity tracking apps should not be repurposed for other things, like tracking more mild seasonal flu outbreaks or finding witnesses to a crime. Google and Apple have said that they “can disable the exposure notification system on a regional basis when it is no longer needed.” This is an important ability, and Apple and Google should establish a clear, concrete plan for when to end this program and remove the APIs from their operating systems. They should publicly state how they will define “the end of the crisis,” including what criteria they will look for, and which public health authorities will guide them.

There will be no quick tech solution to COVID-19. No app will let us return to business as usual. App-assisted contact tracing will have serious limitations, and we don’t yet know the scope of the benefits. If Apple and Google are going to spearhead this grand social experiment, they must do it in a way that keeps privacy risks to an absolute minimum. And if they want it to succeed, they must earn and keep the public’s trust.

Bluetooth contact tracing is a dangerous security hole

Governments and corporations have implemented a mass surveillance mechanism with total disregard for the privacy and security of those in their sights.

Smartphone users have been 'enticed' to use government-issued contact tracing apps.

To make matters worse for users' privacy and security, the [deliberate] smartphone operating system duopoly, Apple and Google, has forced its 'contact tracing' program upon users with no opt-out, under the 'health' banner.

It's baked into the operating system and you cannot remove it, unless of course you are running Android AOSP.

Apart from that, the contact tracing method is ineffective.

Even after the so-called disease is gone, the tracking is not; the security hole will remain on your device.

See this post from 2020, which describes itself as:

"A comprehensive, technobabble free explanation of how Bluetooth contact tracing (doesn't) work and why simple solutions are often not that simple, if not outright dangerous, when applied in real life."

under the headline:

SARS-CoV-2 Bluetooth contact tracing apps are a tremendously stupid idea!

It’s an intriguingly simple concept: when someone tests positive for SARS-CoV-2, quarantine him, get a list of everyone he has been in contact with for the last week, quarantine them as well. Unfortunately, this method doesn’t scale well when done manually and most people won’t know, let alone remember, all the other people they met in the past seven days. However, since (virtually) everybody owns a mobile phone, why not make them simply exchange their owners’ calling cards automatically via Bluetooth, when coming “in contact” (=2 meters for 10 minutes) with each other?


Of course, simply handing out full contact details to everyone in the vicinity is not a smart idea. The inevitable result would be an inbox full of spam and hoax messages; helicopter parents would spy on their children, jealous spouses would want to know if their partners are cheating, government agencies and law enforcement … to be honest, I have no idea why they should be interested, but surely, they will.

So, “privacy” has to be a built-in feature of the app, but is it possible to be identifiable and anonymous at the same time? As self-contradictory as it sounds, it actually is!

Let there be app!

The basic idea behind the DP-3T protocol, as well as Google and Apple's joint effort, works as follows (simplified summary):

Every smartphone gets a unique calling card number (not connected to anything), which is then broadcast once per minute via Bluetooth. Whenever a smartphone receives such a broadcast 10 times in a row, with the signal strength indicating a distance of less than 2 meters, it assumes a contact and remembers the transmitted calling card number for the next 7 days.

If a user finds himself infected, he publishes his calling card number to a central bulletin board. All phones with the app installed check the board regularly for calling card numbers, they have seen within the last week. When a match is found, the phone assumes an infection. That is, publishes its own calling card number to the bulletin board and alerts its user to take actions (get tested/quarantined). This forms a simple alarm chain that only passes on an “infected” status, without allowing anyone to find out the identity of the other links.
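A minimal sketch of that simplified scheme, with random calling cards, a public bulletin board, and on-device matching. The names are illustrative rather than taken from DP-3T or any real app:

    import os

    bulletin_board = set()  # the public registry of "infected" calling cards

    class Phone:
        def __init__(self):
            self.calling_card = os.urandom(16)  # unique, tied to nothing
            self.seen = set()  # cards heard nearby within the last 7 days

        def on_contact(self, other: "Phone") -> None:
            # Called after ~10 pings in a row indicating a distance < 2 m.
            self.seen.add(other.calling_card)
            other.seen.add(self.calling_card)  # logging is mutual

        def report_infected(self) -> None:
            bulletin_board.add(self.calling_card)

        def check_board(self) -> bool:
            # Regular poll: was anyone I met reported as infected?
            if self.seen & bulletin_board:
                self.report_infected()  # pass the alarm down the chain
                return True             # tell the user: get tested/quarantined
            return False

    joe, jane = Phone(), Phone()
    joe.on_contact(jane)
    joe.report_infected()
    assert jane.check_board()  # Jane is alerted without learning it was Joe

The alarm propagates purely by matching opaque numbers, which is exactly what the story below exploits: anything that puts a card into the wrong "seen" set, or the wrong card onto the board, moves through the chain unchecked.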

Clever! But how would this mechanism work in the real world? Story time!

Day 1

Meet Joe Average, a reasonably responsible, reasonably intelligent, everyday person. There is nothing remarkable about him at all. If you were to conduct a scientific study, he’s the kind of guy you’d want to include.

Today is the day the SARS-CoV-2 Bluetooth contact tracing app becomes available. Let’s see how Joe spends the day…

08:00
Joe wakes up. A notification on his phone prompts him to install a new app. The description makes sense, so he complies without giving it further thought. In fact, not being too tech-savvy, he completely misunderstands the concept, thinking the app will warn him of infected people in the vicinity.
09:00
Joe's apartment is on the fifth floor. Out of convenience, he takes the elevator down. The idea that someone might have sneezed in the cabin earlier does not occur to him.
11:00
Joe enters a supermarket. He is in need of some toiletries, which he could easily carry in his hands. Nevertheless, the supermarket now has a policy that forces him to use a shopping cart. He wonders if the staff disinfected the handle properly, then decides to grab the cart by the side. Unfortunately, the previous user had the same idea, while the supermarket staff did not.
12:00
A homeless person gets uncomfortably close while asking Joe for some spare change. This is deliberate. The begging community learned quickly that the COVID-19 fear, if played correctly, will increase the success rate for getting a handout.
16:00
[Image caption: Luv u! Licksies?]
Joe meets a friend in the park who's walking "Smooch", his dog. Smooch is a friendly 75 lb Boxer mongrel who just loves licking faces, but will also happily settle for hands if faces are not available. Several small children and senior citizens (none of them carrying a smartphone) have petted him today so far. Joe gets the works.
19:00
Joe meets a girl he'll only ever know as "Suzie" (not her real name) at a bar. She literally wears nothing except a red dress and high heels. There's really no question as to her intentions and who's going to pay the tab.
19:30
Common sense and Hormones have a short but passionate debate. Hormones win with a little help from alcohol.

Joe does catch SARS-CoV-2 today. When? Where? How? Well, that is anyone’s guess! He certainly had plenty of opportunity.

Assessment: The fundamental flaw of Bluetooth contact tracing is that phones are tracked, not people, and most certainly not viruses. Every moment in the timeline above breaks the alarm chain because a phone was not in the right place at the right time. Of course, a broken alarm chain is still better than none at all, one might say, but not if it comes at the price of people like Joe getting careless.

Day 2

Meet Jane Doe, Joe’s next door neighbor. Joe and Jane’s daily routines are vastly different, so they almost never meet each other in the hall. They do have some similarities, though. Like, for example, using their phones as an alarm clock. Also, the bedrooms of their two apartments are separated by the same wall. Whatever they put on their nightstands is pretty much just an arm’s length apart.

(The rent is about as cheap as this sketch)

Jane got the same notification as Joe, but hesitated at first. She did not install the app until after midnight. Nevertheless, the two phones spent most of that night well within a 2m radius of each other and without any means of detecting the wall in between. Joe might as well have been sleeping with Jane instead of Suzie, as far as the apps are concerned (just one of the many reasons why privacy by design is a must).

Jane is a biology teacher, teaching a graduation class. Most of her students own (much to her dismay) a smartphone and today is an important exam. Jane, knowing a thing or two about viruses, takes reasonable precautions, like wearing a mask and keeping the windows open. However, she can’t prevent her tracer app from picking up a few dozen contacts that day. Of course, this is mutual. Whoever she logs as a contact logs her as well. Later that day, her students will also log their families.

Assessment: Bluetooth contact tracing is hyped as a silver bullet, an alternative to social distancing. It is neither! It merely replaces an effective countermeasure with an inferior one in order to permit risky behavior again. In other words, for policy makers, the availability of Bluetooth contact tracing is an excuse to raise the threshold for what is deemed “dangerous” without actually lowering the risk.

Day 3

It’s John Smith’s day off. He’s a long-distance trucker and parent of one of Jane Doe’s front row students. Their father-daughter day starts off with the two logging a contact for each other.

Assessment: At first it may seem as if Jane Doe and John Smith are just two different names for the same function, but they aren’t. She’s a multiplier (spreads to many people locally); he is a bridge (spreads to few people, but across barriers).

Day 4

John starts a new tour. He picks up cargo early in the morning and drops it off in another town after sunset. Since it is too late already to drive back home, he stays at a motel for the night. He could have slept in his truck, but today he is having company. The kind of company that would make him uninstall the tracing app right away, if it didn’t guarantee privacy.

Assessment: John is not infected, but he is part of an alarm chain. He just linked two multipliers in different communities together. Keep in mind that we are tracking contacts as if they were infections, not actual infections!

Day 5

Joe wakes up, feeling a bit under the weather. At first, he brushes it off, but his condition worsens fast. In the afternoon, he finally seeks medical attention, which includes a SARS-CoV-2 swab test.

Assessment: Any manual action required between suspicion, confirmation and reporting causes signalling delay. In this case, the virus gets another day to spread from anyone Joe might have infected. This is the whole argument for automatic alarm forwarding, even if false alarms are to be expected.

Day 6

Joe’s test results are back: positive. He does the responsible thing and hits the “I am infected” button in his contact tracing app. Within minutes, the alarm cascades through his contacts and the contacts of his contacts. Everyone who has directly or indirectly gotten in touch (pun intended) with him for the last week receives a message with a simple choice:

Either stay at home for 14 days or pay for a test and stay at home till you have the results.

[Image: the social graph of Joe (red), Jane (yellow) and John (blue)]

Potentially a few hundred people are going to have a really rotten day. Most of them will have no clue where they might have caught the bug (after all, that was the whole point of making a contact tracing app, wasn’t it?) or whether they actually caught it at all, but now they carry the “infected” status with all the social and legal implications.

Joe would be really unpopular by now if the app did not guarantee privacy.

Assessment: An alarm, especially a false one, raises the question of liability. Is Joe responsible for having been careless? Is Jane responsible for causing a false alarm? Is the app maker responsible for the security holes in the protocol’s design? Fact is, a lot of people will have to drop everything in order to get tested, and someone will have to foot the bill. False alarms are a pretty convincing reason to uninstall the app.

Meanwhile in an alternate reality

There are, of course, different versions of the story above. Let’s explore some alternatives by putting Joe (source), Jane (multiplier) and John Smith (bridge) in slightly different roles.

Joe, the hacker
What if Joe had stayed home the first day (did not get infected), got hold of Jane’s phone and decided to swat her for fun?
Joe, the slacker
What if Joe was not a neighbor of Jane, but one of her students, desperate to meet a deadline. Could he buy himself an extension by faking an infection?
Joe, the movie buff
What if Joe had invited Suzie to the movies and turned his phone off before entering the cinema hall?
Joe, the deceived
Plot twist: Joe just caught the flu. Same symptoms, different pathogen. Should he wait for test results (or be tested at all) before hitting the alarm?
Joe, the unprepared
Joe is single. What if he runs out of food while quarantined? Will he sneak out, leaving his phone at home?
Joe, the kindergarten teacher
What if someone had the idea to reopen kindergarten, thinking the availability of Bluetooth contact tracing renders social distancing unnecessary?
Jane, the hypochondriac
What if Jane had an unrelated symptom, quarantined herself without a test and thinks she gained immunity afterwards?
Jane, the gym instructor
How many contacts would Jane’s phone log, if she left it in the locker room?
John, the secret agent
Are there countries that would benefit from keeping other countries in lockdown? If so, what could be more effective than interlinking as many people as possible, then sending a fake alarm?
101010, the software bug
Is it possible that a piece of software, especially one that is based on a bad idea and coded in a hurry, might malfunction?

Every sufficiently large community will have multiple Joes, Janes and Johns. The story above inevitably unfolds, over and over, time-displaced, in parallel and with numerous variations. Some of the story lines will intertwine, others won’t. Every variation adds complexity and requires exception handling.

Privacy-aware Bluetooth contact tracing is fragile at best. Even a tiny amount of malevolence or stupidity can easily send waves through the entire system, making it completely unreliable. We are essentially putting our faith in a system that is constantly going to cry wolf. (Repeated) false alarms have consequences:

  • People will stop taking alarms seriously.
  • People will uninstall the app.
  • People will try to circumvent the app (Suzie, for example, simply left her smartphone at home).

Worst of all, however, people will demand that the app be fixed and governments will succumb to the sunk cost fallacy.

Privacy has to go!

The privacy-aware approach has three major weaknesses:

  1. The system is open for trolling.
  2. Any incoming alarm must be treated as the real thing.
  3. A false alarm cannot (efficiently) be cancelled.

Obviously, an anonymous bulletin board will not work in real life, so the next version of the contact tracing app will have to be backed by a central authority that knows the identity of every user. Needless to say, this will, quite rightly, creep people out and result in the app getting uninstalled.

Installation is mandatory!

Voluntary use of the app builds on trust. Trust builds on privacy. Privacy cannot be guaranteed. This means the only way to get the app on people’s phones is by installing it forcefully and making it unclosable (i.e. making it part of the operating system). People who don’t carry a smartphone, or who turn it off, will eventually find that they may no longer be permitted to enter supermarkets or use public transport.

Of course, the app will still not work properly, as people will make an effort to actively circumvent or even sabotage the system.

No end in sight!

Privacy, schmiracy. Many app proponents are of the opinion that saving human lives is more important than saving human rights. Of course, expressing that opinion requires the basic human right of free speech. So, yeah, there’s an interesting discussion starter. Another interesting and more practical question is: for how long are we going to suspend the right to privacy?

The world is full of places with poor medical care. Slums, refugee camps and the like are communities where SARS-CoV-2 can go into hiding and from where it can be re-imported at any time. The pandemic does not really end till the virus is completely eradicated. There are just dormant phases in between outbreaks and those are the ones when we actually need the contact tracing apps to be active.

So, when can we have our privacy back? The answer is pretty much: never.

Conclusion

Bluetooth contact tracing is a dumb idea. At best, it will not work; at worst, it will lead us into a dystopian future.

Listening to scientists is generally a good idea. Some epidemiologists/virologists may suggest contact tracing apps as a promising approach, but their expertise is in… well, epidemiology/virology, not computer science. Ask a computer scientist for their opinion and the answer is: FUCK NO!

08 March 2022

Rio Tinto slapped with $750k fine

Just a small mention of a big story:


Rio Tinto has been slapped with a $750,000 fine.

The mining company has been ordered by the Federal Court to pay a penalty of $750,000 for contravening its continuous disclosure obligations.

The court found that between December 2012 and January 2013, Rio Tinto failed to disclose to the ASX that mining assets held by Rio Tinto Coal Mozambique were no longer economically viable as long-life, large-scale, tier-one coking coal resources.

“Rio Tinto had obligations to the market to keep it adequately informed about its mining projects overseas,” said ASIC deputy chair Sarah Court.

“When Rio Tinto was aware of information that Rio Tinto Coal Mozambique was no longer economically viable as a long-life, large-scale, Tier 1 coking coal resource, the market should have been properly informed in a timely manner,” Ms Court explained.

“The core of ASIC’s case against Rio Tinto was its continuous disclosure breach and we are pleased the matter has been finalised with a penalty ordered.”

The orders were made by consent after ASIC and Rio Tinto agreed to resolve the proceedings and filed joint penalty submissions. 

The court also ordered, with the consent of the parties, that ASIC’s claims against two former officers of Rio Tinto, Tom Albanese and Guy Elliott, be dismissed, with the parties bearing their own costs.

Source: www.investordaily.com.au