Exposed Code in Contact Tracing Apps: Copycats and Worse

The global discussion surrounding contact tracing apps has long centered on the pressing issue of privacy. When it comes to information as sensitive as one’s health condition, it’s not surprising that people become nervous - especially when we see governments opting for centralized implementations that quickly spread fears of a “surveillance state”.

In this ever-complicated question of data privacy, a less discussed topic is the actual security of these contact tracing apps. Those working in application security, myself included, know the mayhem that can erupt when an app is poorly secured: attackers can exploit vulnerabilities in the app itself as well as abuse integrity-related weaknesses.

Scam copycat apps

News has just broken of 12 fake contact tracing apps that imitate official government apps and target citizens in Asia, Europe, and South America. These scam apps install malware on the user’s device - namely banking trojans that steal user credentials.

Such copycat apps are especially dangerous because they leverage the brand recognition, trust, and authority of a major entity - which may leave users with their guard down and thus more vulnerable to attack.

These 12 apps are likely to be just the start of a much bigger wave, as countries are still just starting to adopt their official contact tracing apps. This current situation has presented attackers with a unique opportunity to launch low-tech attacks with potentially huge gains. Financial fraud, ransomware, and identity theft are just a few of many possible high-gain outcomes that attackers now have at their fingertips.

So how easy is it for attackers to deploy these copycat contact tracing apps? To start, attackers need access to the app’s source code. Because most contact tracing apps are open-source, getting the source code is typically trivial. This allows attackers to deploy possibly dozens of copycats to dilute the risk of being shut down. The actual distribution of these apps is believed to occur outside of official app stores, through third-party app stores and websites instead.

Other attacks that stem from exposed source code

However, spreading scam copycat apps is just one of several attacks available when adversaries can copy and change the source code of these apps to their heart’s content.

When an application leaves its source code completely exposed, attackers can attempt to dynamically modify the contents of memory, change or replace the system APIs that the application uses, or modify the application’s data and resources. This is possible because, by default, deployed applications lack any integrity verification controls. Exploiting this security weakness lets attackers covertly hijack the legitimate use of these apps for monetary gain.
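To make the missing control concrete, the core of an integrity verification check is simply comparing a runtime digest of a shipped artifact against a known-good digest recorded at build time. The sketch below is illustrative only (real contact tracing apps would implement this in Kotlin or Swift with platform APIs, and the digest value here is hypothetical):

```python
import hashlib

# Known-good SHA-256 digest of a bundled resource, computed at build time.
# (Hypothetical content; a real build pipeline would generate and embed this.)
EXPECTED_DIGEST = hashlib.sha256(b"official app resource").hexdigest()

def resource_is_intact(resource_bytes: bytes) -> bool:
    """Return True only if the resource still matches the build-time digest."""
    return hashlib.sha256(resource_bytes).hexdigest() == EXPECTED_DIGEST

# An unmodified resource passes; a tampered one is detected.
assert resource_is_intact(b"official app resource")
assert not resource_is_intact(b"official app resource + injected payload")
```

Without a check like this, nothing distinguishes the legitimate binary from one whose resources an attacker has quietly swapped out.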

This threat is specifically addressed in the Mobile Top 10 project by OWASP, which states that reverse-engineering can be exploited to “reveal information about back end servers; reveal cryptographic constants and ciphers; steal intellectual property; perform attacks against back end systems; or gain intelligence needed to perform subsequent code modification”. This threat is even more relevant in decentralized implementations of contact tracing apps, where all the data and sensitive algorithms reside on the client’s device.

The need for application integrity

The attack scenarios discussed above all share a common characteristic: the ability to tamper with the integrity of contact tracing apps.

When the stakes are as high as the case of government-backed apps that handle critical health info and can induce behavior changes on a massive scale, we cannot leave the window open to attacks against application integrity.

Using current technology, the security teams behind these implementations can (and should) add protective layers to these apps’ source code, with a special focus on runtime code protection. For example, code locks can tie the source code to an allowlist of environments and operating systems and break the app when it runs outside this permitted list. By locking attackers out, this security layer goes a long way towards preventing low-tech copycats.
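The code-lock idea can be sketched in a few lines. In this simplified Python illustration (the allowlist entries and the `check_code_lock` helper are hypothetical; commercial runtime protection products implement this far more robustly), the app deliberately breaks when it detects an environment outside the permitted list:

```python
# Hypothetical allowlist of (os, version) environments the app may run in.
ALLOWED_ENVIRONMENTS = {("android", "12"), ("android", "13"), ("ios", "16")}

def check_code_lock(os_name: str, os_version: str) -> None:
    """Deliberately break execution outside the permitted environment list."""
    if (os_name, os_version) not in ALLOWED_ENVIRONMENTS:
        raise RuntimeError("Code lock triggered: environment not permitted")

check_code_lock("android", "13")  # permitted environment: runs normally

# A copycat running in an emulator or unsupported OS breaks immediately.
try:
    check_code_lock("emulator", "x86")
    lock_triggered = False
except RuntimeError:
    lock_triggered = True
assert lock_triggered
```

The point of breaking loudly rather than degrading gracefully is to deny attackers a working app to repackage.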

Additional runtime defenses can be set up to break app execution when there has been any type of compromise to the integrity of the app. Namely, a runtime protection layer can be used to scatter integrity checks throughout the source code so that, whenever even the slightest change occurs, the app will immediately break, on purpose, and make attackers lose their progress.
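The scattered-checks approach can be modeled simply: record a digest of each protected routine at build time, then verify those digests at many points during execution. This Python sketch uses a dictionary of source strings as a stand-in for shipped code (the routine name and its body are hypothetical):

```python
import hashlib

# Simplified model of the app's critical routines as shipped at build time.
SHIPPED_CODE = {"report_exposure": "def report_exposure(): upload(keys)"}

# Per-routine digests recorded at build time.
BASELINE = {name: hashlib.sha256(src.encode()).hexdigest()
            for name, src in SHIPPED_CODE.items()}

def integrity_check(code: dict) -> bool:
    """One of many checks scattered across the app: any mismatch means break."""
    return all(hashlib.sha256(src.encode()).hexdigest() == BASELINE[name]
               for name, src in code.items())

assert integrity_check(SHIPPED_CODE)  # the untouched app keeps running

# Even a one-line modification by an attacker is caught.
tampered = {"report_exposure": "def report_exposure(): upload(keys, attacker)"}
assert not integrity_check(tampered)
```

Because the checks are duplicated throughout the codebase, an attacker cannot simply patch out a single guard; each change they make risks tripping another check and resetting their progress.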

I have been fortunate to be an active member of a Portuguese working group diving into the security and privacy issues of Portugal’s upcoming contact tracing app. By raising these questions about app integrity and highlighting application security recommendations from entities like OWASP, our goal is to make sure that the discussion doesn’t end at centralized vs. decentralized.

Instead, it should urge development teams to consider all security implications and put in place vetted security solutions to guarantee application integrity and shut off hundreds of possible exploits.