How to secure a mobile app against its user?

You cannot.

As soon as the user has the mobile device and your application, nothing stops them from decompiling your application, understanding how it works and what data it sends, and replicating it. They can even cheat with a contraption that physically rotates the phone, making your application believe a human is using it.

They don't even need to decompile your application; they just have to put a proxy in between to intercept the requests and understand the protocol.

From the comments:

If you control the hardware, you can secure the app:

Not quite. Apple controls everything from the processor to the UI of the iPhone, and jailbreaks are still a thing. Even though they control every aspect of it, one day someone jailbreaks and roots the iPhone, and loads your app onto it.

Certificate Transparency, Key Pinning

Not useful if the device is rooted. Checksums, digital signatures and integrity verification only work if the OS is not compromised. If the user owns the OS and the device, they can disable OS checks and edit the app binary, changing the very instructions that verify the signature or checksum.
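
For reference, this is roughly what key pinning looks like on the client, shown here as a minimal sketch using OkHttp's CertificatePinner (the hostname and the pin value are placeholders, not real values). It is exactly this kind of check that a user with a rooted device can hook or patch out of the binary:

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient
import okhttp3.Request

// Minimal key-pinning sketch with OkHttp. The hostname and the SHA-256 pin
// below are placeholders; in practice you pin the hash of your own server's key.
val pinnedClient: OkHttpClient = OkHttpClient.Builder()
    .certificatePinner(
        CertificatePinner.Builder()
            .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
            .build()
    )
    .build()

// Requests to the pinned host fail with SSLPeerUnverifiedException when the
// served certificate chain does not match the pin (e.g. an intercepting proxy).
fun fetchScore(): String? =
    pinnedClient.newCall(Request.Builder().url("https://api.example.com/score").build())
        .execute()
        .use { response -> response.body?.string() }
```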

Virtual Machine, Code obfuscation

They make it much more difficult to analyze the code, but the code must eventually be executed by the processor. If a disassembler cannot help, a debugger will. The user can put breakpoints on key parts of the code, and in time will reach the function that checks the certificate, the checksum, or any other validation in place, and can alter anything they want.

So it's pointless to try?

No. You must weigh the costs and benefits. Just don't count on any defense being unbeatable, because every defense can be beaten. You can only make it hard enough that the attacker gives up, having to throw lots of resources against your app for little benefit.


While I generally agree with ThoriumBR's answer, there are some things you can do.

For example, you can analyze the user's behavior for discrepancies, such as:

  1. Obviously Replayed Data

    For example, a user could act in a desired way, capture the sent data, and replay it at a later time. This can be detected quite easily, since the supposedly noisy sensor data just happens to be exactly the same, which would never happen in a real use case.

  2. Obviously Faked Data

    For example, a user could report fake sensor data. This data would likely not be random enough. For example, instead of reporting a location of 48,7849165°N;27,4159014°W, the faked datapoint could be 48,78°N;27,42°W.

  3. Machine-like Patterns

    For example, a user could write a program that automatically sends noisy and "correct-looking" data always at exactly the same time of day. This seems suspicious, as basically no real user would be this precise.

Of course, you can't see these examples as an exhaustive list. They're merely here to serve as examples of what kind of patterns you can detect; a rough sketch of such checks follows below. Training your system to detect anomalies is a lot harder in practice, and will likely be more difficult to implement than just living with the fact that some people will cheat.
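
To make the idea more concrete, here is a hypothetical sketch of the three checks above. All field names and thresholds are invented for illustration; a real system would have to be tuned against actual submission data.

```kotlin
import kotlin.math.abs

// Hypothetical shape of one submission; field names are invented for this sketch.
data class Submission(
    val sensorSamples: List<Double>,   // raw accelerometer/gyroscope readings
    val latitude: Double,
    val longitude: Double,
    val secondOfDay: Int               // when the submission arrived
)

// Flags the three patterns discussed above. Thresholds are arbitrary examples.
fun looksSuspicious(current: Submission, history: List<Submission>): Boolean {
    // 1. Obviously replayed data: real sensor noise never repeats exactly.
    val replayed = history.any { it.sensorSamples == current.sensorSamples }

    // 2. Obviously faked data: coordinates rounded to ~2 decimal places have
    //    far less precision than a real GPS fix.
    fun tooCoarse(v: Double) = abs(v * 100 - Math.round(v * 100)) < 1e-9
    val faked = tooCoarse(current.latitude) && tooCoarse(current.longitude)

    // 3. Machine-like pattern: submissions landing at the exact same second
    //    of the day, every time, are unlikely to come from a human.
    val machineLike = history.size >= 3 &&
        history.all { abs(it.secondOfDay - current.secondOfDay) < 2 }

    return replayed || faked || machineLike
}
```

The point is not these specific rules, but that each heuristic compares a submission against what plausible human-generated data looks like.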


Since the question was edited after the answer was published: You could perform a more thorough analysis of the dataset of the winners to see if irregularities occur. This way, you would only have to perform analysis on data sets which actually matter to you as a company.

As Falco mentioned in the comments, adding a disclaimer such as "Your submissions will be analyzed to prevent cheating" may prevent some people from sending in fake submissions.


While I agree with the other answers, I find there are a few pragmatic things that are overlooked there.

Full disclosure: I work for a company that builds obfuscation / protection software for mobile applications.

Full, unbreakable protection is not possible for an app running on an attacker-controlled device. However, software exists that aims to raise the bar and make it less worthwhile, or not worthwhile at all, for a person to carry out an attack.

Typically, these solutions cover two aspects:

Static protection

This usually includes a bunch of obfuscation techniques aiming to make life difficult for an attacker who wants to analyse a mobile application by looking at the binaries with tools like IDA Pro, Ghidra and Hopper.

Techniques here include control-flow obfuscation, semantic obfuscation (class and method names, ...), arithmetic obfuscation, string encryption, class encryption, and more.

These make it very difficult to "peek" inside a binary and figure out what is going on, but don't offer a lot of protection when an attacker looks at the application while it is running on the device itself.
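
To give a feel for one of those techniques, here is a toy illustration of string encryption: rather than the readable literal, the binary only contains an encoded constant plus a small decode routine. The single-byte XOR scheme is purely illustrative; real obfuscators generate this automatically and use much stronger schemes.

```kotlin
// Toy illustration of string encryption. Real obfuscators generate code like
// this automatically and use stronger schemes than a single-byte XOR.
private const val KEY = 0x5A

// These encoded bytes are what ends up in the binary instead of the readable
// literal "api_secret".
private val ENCODED = byteArrayOf(0x3B, 0x2A, 0x33, 0x05, 0x29, 0x3F, 0x39, 0x28, 0x3F, 0x2E)

// Decodes the constant back to plain text only at runtime.
private fun decode(data: ByteArray): String =
    String(ByteArray(data.size) { i -> (data[i].toInt() xor KEY).toByte() })

fun apiSecret(): String = decode(ENCODED)   // "api_secret"
```

An attacker reading the binary no longer sees the string, but, as noted, a debugger or a hook on the decode routine still recovers it at runtime.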

Dynamic protection

These protection techniques aim to shield an application from analysis or modification while it runs on the device. Popular tools here are debuggers (lldb, gdb, ...) and hooking frameworks (Frida, Cydia Substrate, ...).

Techniques here try to block or detect the use of these tools, detect tampered execution environments (jailbroken or rooted devices, emulators), detect modifications made to the application, and much more.
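
To give a flavour of what such checks look like, here is a simplified, hypothetical Android sketch of a few common signals. Real products combine far more signals and obfuscate the checks themselves, since each individual heuristic on its own is easy to bypass:

```kotlin
import java.io.File

// Simplified environment checks of the kind dynamic protection tools perform.

// Rooted devices commonly expose an `su` binary in one of these locations.
fun probablyRooted(): Boolean =
    listOf("/system/bin/su", "/system/xbin/su", "/sbin/su").any { File(it).exists() }

// On Linux/Android, a non-zero TracerPid in /proc/self/status means some
// process (e.g. a debugger) is ptrace-attached to us.
fun debuggerAttached(): Boolean =
    File("/proc/self/status").readLines()
        .firstOrNull { it.startsWith("TracerPid:") }
        ?.substringAfter(":")?.trim()?.toIntOrNull()
        ?.let { it != 0 } ?: false

// Frida's default server listens on TCP port 27042; a successful connect to
// localhost on that port is a common (and bypassable) heuristic.
fun fridaServerPresent(): Boolean = try {
    java.net.Socket("127.0.0.1", 27042).use { true }
} catch (e: Exception) {
    false
}
```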

Conclusion

While it is of the utmost importance to ensure your application is built using well-defined security practices (obfuscation / protection software will not help you here!), tools exist that function as layers of shells around your application which, together, make it much more difficult, and hopefully not worthwhile, to crack it.