termux / termux-app

Termux - a terminal emulator application for Android OS, extendable through a variety of packages.

Home Page: https://f-droid.org/en/packages/com.termux

No more exec from data folder on targetAPI >= Android Q

n0n3m4 opened this issue · comments

The ability to run execve() on files within an application's home directory will be removed in target API > 28.
Here is the issue on Google bug tracker:
https://issuetracker.google.com/issues/128554619
As expected, it is yet another "working as intended". Furthermore, as commented in the issue:

Relying on exec() may be problematic in future Android versions.

This seems to completely break Termux in the long term, as all its packages contain executables.

This is definitely the worst thing that could happen.
Still, in an AVD with the Android Q preview, Termux currently works:

[Screenshots: Termux running in the Android Q preview AVD]

I brought up that this was forbidden and would likely become part of the SELinux policy last year (9 Apr 2018): #655.

As I stated then,

They rewrote the policy to be much more explicit about this being disallowed, and it can be expected that they'll not only act on it for the Play Store but via SELinux policy.

They intend to support cases like Termux by having apps like Termux upload their packages as separate apps to the Play Store, with it initiating user installs / uninstalls of those apps.

Staying ahead of the game (i.e. future SELinux policy enforcement) would involve not extracting the executables manually but rather running them from where the apk package installer extracts libraries. Being able to execute app_data_file either as an executable or a library is on the way out in the long term.

It doesn't fundamentally break Termux, but rather Termux would need to switch to an alternative approach for distributing packages. Using apt within the app sandbox to install code isn't going to be viable anymore. The restrictions are no doubt going to be tightened up in the future too, and it can be expected that libraries will need to be mapped from the read-only app data directory or apk too, rather than the read-write home directory working. It wouldn't be necessary to split absolutely everything into assorted apks, but rather there could be apks for groups of related packages based on size. Termux can initiate the apk installs itself too, after prompting the user to allow it as an unknown source. It's an inconvenience for sure, but not a deal breaker.

Someone there accused me of concern trolling, but I was genuinely giving an insider heads up about likely changes on the horizon bringing w^x enforcement for data in storage. I use Termux and also work on Android OS security, so I wanted to have a conversation about how this could be addressed long before it was going to become a problem. I provided enforcement of the same kind of restrictions in my own downstream work, but with a toggle to turn it off for apps like Termux. I would have loved to have an alternate option for installing packages compatible with the security feature, and that would have prepared Termux for the eventual enforcement of this feature upstream.

It's worth noting that there are still holes in the SELinux policy, which can be used to bypass it by jumping through some hoops, but the intention is clearly to move towards enforcing w^x and enforcing the official rules via the implementation. See https://play.google.com/about/privacy-security-deception/malicious-behavior/.

The following are explicitly prohibited:

[...]
Apps or SDKs that download executable code, such as dex files or native code, from a source other than Google Play.
[...]

Termux could also ship the necessary scripting for generating a Termux package apk from executables/libraries so that users could create their own and continue running their own code, albeit less conveniently.

@xeffyr This only applies to apps targeting API > 28. Termux will continue to work indefinitely as long as it continues to target API 28. However, there's a timeline for phasing out old API targets on the Play Store about a year after they're introduced. It then becomes impossible to upload a new version of the app without updating the target API level.

https://developer.android.com/distribute/best-practices/develop/target-sdk

Distributing the app outside the Play Store would still be possible, but the OS has the concept of a minimum supported target API level already (ro.build.version.min_supported_target_sdk). It's currently 17 in Android P and is being increased to 23 in Android Q. It currently only produces a warning, but it won't necessarily remain that way. They don't want targeting an old API level or distributing outside the Play Store to be a loophole for bypassing privacy and security restrictions tied to API level.

@thestinger

Someone there accused me of concern trolling

I apologize for how issue #655 was received; I guess people freaked out and decided to shoot the messenger. We should have moderated it better.

It's currently 17 in Android P and is being increased to 23 in Android Q

I guess we have a year or two (at least) to figure out how to deal with this then

You can probably stay at API 28 until around November 2020 for the Play Store assuming the timeline at https://developer.android.com/distribute/best-practices/develop/target-sdk is followed again for API 29.

It would be even longer until using API 28 broke outside the Play Store via a stricter future version of ro.build.version.min_supported_target_sdk doing more than warning. Probably at least 2 years and likely significantly longer.

So, it's not like it's going to suddenly break when Android Q comes out, since it's only for API 29+.

@thestinger

Termux could also ship the necessary scripting for generating a Termux package apk from executables/libraries so that users could create their own and continue running their own code, albeit less conveniently.

That is horrible in terms of usability. Also, wouldn't the apks need to be signed with the original apk key?

Not only signed with the original key. It seems that executables will have to be placed in the native lib directory:

While exec() no longer works on files within the application home directory, it continues to be supported for files within the read-only /data/app directory. In particular, it should be possible to package the binaries into your application's native libs directory and enable android:extractNativeLibs=true, and then call exec() on the /data/app artifacts. A similar approach is done with the wrap.sh functionality, documented at https://developer.android.com/ndk/guides/wrap-script#packaging_wrapsh .
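For illustration, here is a minimal sketch of that approach from the Java side (all names here are hypothetical; it assumes the binary was packaged under jniLibs/<abi>/ with a library-style name such as libhello.so, and android:extractNativeLibs="true" in the manifest):

```java
import android.content.Context;
import java.io.File;
import java.io.IOException;

public final class NativeLibExec {
    // Spawn a bundled executable from the read-only native library
    // directory (under /data/app), where exec() remains permitted on API 29+.
    public static Process run(Context ctx, String exeName, String... args)
            throws IOException {
        File exe = new File(ctx.getApplicationInfo().nativeLibraryDir, exeName);
        String[] cmd = new String[args.length + 1];
        cmd[0] = exe.getAbsolutePath();
        System.arraycopy(args, 0, cmd, 1, args.length);
        return new ProcessBuilder(cmd).redirectErrorStream(true).start();
    }
}
```

Something like NativeLibExec.run(context, "libhello.so") then gives you a Process whose output can be read from getInputStream().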

Termux compiled with targetSdkVersion 29:

[Screenshot: Termux not working on Android Q]


Exec is allowed from the native lib directory:
[Screenshot: binaries in the /lib folder on Android Q]
But this directory is read-only.

Distributing packages in APKs will require either publishing hundreds of APKs to Google Play / F-Droid or putting multiple packages into a single APK up to the maximum size (~100 MB). Technically it is possible to have very large APKs, but they cannot be distributed through Google Play, and maybe not F-Droid either.

But anyway, distributing all packages in APKs is weird:

  • Such distribution may be a bit confusing from the user's side.
  • Much longer waits for updates, if distribution is done through Google Play.

Also, we should not forget that the user environment will be broken:

  • It won't be possible to compile stuff in Termux.
  • Even if something can somehow be compiled, it will be useless anyway, since it can't be executed.
  • Most things installable with pip, gem, and cpan will be broken.

Python, Ruby, Perl, etc. code written by users will still work without a hassle because it's not native code. They won't be able to execute it as ./script.py, but they can still run it as python3 script.py. I don't see any way that would ever stop working. Libraries without native code are also going to continue working without being in the native library directory, since they don't get mapped as executable.

In the future, native libraries will likely all need to be in the native library directory to work but for API 29 they're only enforcing it for executables. I expect the enforcement will happen for native libraries next year, since they already added an auditallow rule to warn in the logs when it occurs.

Distributing packages in APKs will require either publishing hundreds of APKs to Google Play / F-Droid or putting multiple packages into a single APK up to the maximum size (~100 MB). Technically it is possible to have very large APKs, but they cannot be distributed through Google Play, and maybe not F-Droid either.

Yes, that's what I was saying would need to happen: bundling groups of related packages inside of shared uid apks signed with the same key. For users to be able to extend it with their own native code, they would need to have their own build of Termux signed with their own key, along with scripting for generating new extension apks out of their code. There might be a better approach than my suggestion. It was just intended as a starting point.

It will also still be possible to interpret native code from the app data directory. For example, I think valgrind executable_in_app_data will continue working, even though ./executable_in_app_data will not, since valgrind acts as a machine code interpreter. Similarly, running native code inside QEMU would obviously still work.

Is this a potential use case for Dynamic Delivery using App Bundles?

A dynamic feature module could be either a bundled group of related packages, or, if still feasible, each separate package. The submitted app bundle would include all available packages, where the base installation is only core packages, and pkg install would request the dynamic feature from Google Play.

This would likely require a pretty robust build system to keep the Google Play listing's packages in parity with termux-packages, perhaps as a separate listing from termux-core.

I see this as a way to comply with Google Play while still allowing a package manager-like function, potentially with another mechanism for installations outside of Google Play or from source that more closely resembles the current mechanism.
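As a sketch of how pkg install might be bridged to dynamic delivery via the Play Core split install API (the module name, class, and mapping are hypothetical, and whether binaries delivered this way land somewhere exec()-able is exactly the open question):

```java
import android.content.Context;
import com.google.android.play.core.splitinstall.SplitInstallManager;
import com.google.android.play.core.splitinstall.SplitInstallManagerFactory;
import com.google.android.play.core.splitinstall.SplitInstallRequest;

public final class DynamicFeaturePkg {
    // Hypothetical: "pkg install vim" maps to a dynamic feature module "vim".
    public static void install(Context ctx, String module) {
        SplitInstallManager manager = SplitInstallManagerFactory.create(ctx);
        SplitInstallRequest request = SplitInstallRequest.newBuilder()
                .addModule(module)
                .build();
        manager.startInstall(request)
                .addOnSuccessListener(sessionId -> {
                    // Play has started downloading the module.
                })
                .addOnFailureListener(e -> {
                    // No Play services, unknown module, quota exceeded, ...
                });
    }
}
```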

Is this a potential use case for Dynamic Delivery using App Bundles?

It could work, if dynamic feature modules are able to ship additional executables / libraries to be extracted into the native libraries directory. It's not clear if that's possible. It appears the limit is 500MB via an app bundle, rather than 100MB, so it's a limited solution:

Publishing with Android App Bundles also increases the app size limit to 500MB without having to use APK expansion files. Keep in mind, this limit applies only to the actual download size, not the publishing size. So, users can download apps as large as 500MB and, with dynamic delivery, all of that storage is used for only the code and resources they need to run the app. When you combine this with support for uncompressed native libraries, larger apps, such as games, can reduce disk usage and increase user retention.

I see this as a way to comply with Google Play while still allowing a package manager-like function, potentially with another mechanism for installations outside of Google Play or from source that more closely resembles the current mechanism.

I don't think it's going to be able to be much different outside the Play Store, since the SELinux policy restrictions on executing only read-only native code still apply for apps outside the Play Store and Android variants without the Play Store or Play Services. It would make sense to work around this issue at the same time as coming into compliance with the Play Store policy on downloading executable code, but they don't necessarily need to be solved at the same time. I think it would be best to choose a path that will work both inside and outside the Play Store. I don't know how realistic it is to support the app bundle and dynamic feature modules outside Play, since there isn't an existing distribution option with support for it and it's not clear how much is available without Play Services, etc.

Sure - I suppose my suggestion was intended more as a solution to the SELinux policy restrictions by using Google Play, but it does hinge on 1) the ability to ship additional executables (I suspect their example of games would make that true, but it is still a question) and 2) reliance on Google Play as a complete distribution system and being subject to their rules and whims.

It's a difficult problem, no doubt. I suspect that the timing of App Bundles and the native executable restriction policy are not entirely unrelated, and a solution to downloading and executing arbitrary native code without going through Google Play isn't immediately obvious.

bundling groups of related packages inside of shared uid apks signed with the same key.

Seems to be the quickest and simplest solution at this point, with the above-mentioned restrictions, which may annihilate realistic usability.

What if you bundled PRoot (and busybox and a bare minimum of what is required to get a PRooted system set up and running) with the app itself and then had everything run through PRoot? Could PRoot be modified to allow executing outside of the lib directory? It has been proven twice before to be able to get around the limitation of the sdcard being mounted noexec.

This is the most recent example of how PRoot can defeat noexec: termux/proot@ebcfe01

Notice what is not blocked here: https://android-review.googlesource.com/c/platform/system/sepolicy/+/804149

There would be a performance impact of course, but maybe it allows the party to go on.

I work on UserLAnd and we always use PRoot. This is where my mind went when I heard of this limitation. Thought I would share.

If proot works, this means that we can leave packages as-is. Proot also means that Termux will no longer be prefixed, which is good, as a lot of patches can be dropped, plus there's less headache for users compiling software.

I will try to package proot into the Termux APK and run it on an Android Q AVD to test this.

I think that proot (or other theoretically possible packages like this) will not be broken until Google forbids any kind of JIT in third-party applications, so this is a very nice idea. I'm still not sure Google will ever dare to do such a thing.

forbids any kind of JIT

If Google does this, it will have to forbid JavaScript in browsers too.

PRoot will certainly execute from the app's lib directory, just like busybox did above, but you will need to play with the proot version I mentioned above. It is @michalbednarski 's work, so he might be able to comment on how suited he thinks it might be for this purpose.

@corbinlc The original proot worked. Even the anti-noexec extension isn't needed.

[Screenshot: proot running binaries on Android Q]

So under proot everything works fine:
[Screenshot: a shell running under proot on Android Q]

No need to distribute packages in APKs or bundles. All that is needed is to launch the shell under proot.

Notice what is not blocked here: https://android-review.googlesource.com/c/platform/system/sepolicy/+/804149

Note that execute for app_data_file (mapping as PROT_EXEC) had an auditallow rule added after this commit as they're planning on removing that in the near future, perhaps next year. It will currently log audit warnings leading up to the likely eventual removal.

The remaining holes in the SELinux policy are tied to the ART JIT and other JIT compilers: ashmem execute (an implementation detail that could change) and execmem (in-memory rwx code and rw -> rx transitions). ART supports using full AOT compilation instead of the JIT compiler, but it's not their current choice.

Chromium-based browsers including the WebView use execmem in the isolated_app domain, not untrusted_app, so even without an Apple-like approach only permitting the bundled browser and WebView for other apps to use, that wouldn't be a direct blocker. I would be surprised if they removed execmem for untrusted_app in the near future, but it could happen over the longer term, especially if they simply restrict it to a variant of isolated_app.

The original issue that I filed (#655) was about the general problem of this being in violation of their policy, before there was a partial implementation of a technical mechanism enforcing it for native code. There can never be full enforcement at a technical level since interpreters will continue to work even once native code execution is forbidden.

If Google does this, it will have to forbid JavaScript in browsers too.

They permit JavaScript via the exception in the policy for isolated virtual machines:

An app distributed via Google Play may not modify, replace, or update itself using any method other than Google Play's update mechanism. Likewise, an app may not download executable code (e.g. dex, JAR, .so files) from a source other than Google Play. This restriction does not apply to code that runs in a virtual machine and has limited access to Android APIs (such as JavaScript in a webview or browser).

https://play.google.com/about/privacy-security-deception/malicious-behavior/

The result at a technical level would be that they need to keep around execmem for a variant of the isolated_app SELinux domain used by browsers, but not untrusted_app. At the moment, both isolated_app and untrusted_app permit execmem, but Chromium doesn't use it in untrusted_app. ART uses the untrusted_app execmem when configured to do JIT compilation via the relevant system properties. It's fully optional though, and still supports full AOT compilation as was done in Android 6.

@xeffyr I guess that makes sense, since PRoot already deals with execve. The new work is only needed for files on a noexec partition. That is great news for everyone.

@thestinger interesting comment about them potentially removing the ability to mmap(PROT_EXEC) for files outside of the app's lib directory in the future. I guess we live to fight another day (or more likely more than a year).

Do you know if they are planning on preventing mprotect(PROT_EXEC)? Where do you see the audit call you mentioned related to mmap(PROT_EXEC)? I don't know where these exist in the Android source tree.

Files in the app home directory are labelled as app_data_file and this is the audit rule generating warnings for mapping the data as executable:

https://android.googlesource.com/platform/system/sepolicy/+/08450264ae3f917f6b8e4091d6fedf84ef8d796f/private/untrusted_app_all.te#27

They add auditallow rules leading up to eventually removing it from the policy. You can try loading a library from app data with API 29 and you'll see an audit warning in the logs.
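A minimal repro sketch (hypothetical library name):

```java
import android.content.Context;
import java.io.File;

public final class AuditRepro {
    // On API 29 this still succeeds, but the auditallow rule causes the
    // kernel to emit an avc audit message for the app_data_file execute
    // access, visible via logcat / dmesg.
    public static void loadFromAppData(Context ctx) {
        File lib = new File(ctx.getFilesDir(), "libfoo.so"); // hypothetical
        System.load(lib.getAbsolutePath());
    }
}
```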

In-memory code generation via rwx pages or rw -> rx transitions is classified as execmem, which is something they'd like to remove in the long-term but it's currently used by the ART JIT. It's not used by Chromium within untrusted_app domains, since it only does that within the sandboxed renderers which use isolated_app domains. They could remove it from the main isolated_app and require that apps opt-in to having it available via a special domain. It should be expected that it will be eventually happen, since this is a very standard form of security hardening and they've been slowly moving towards it.

They already removed execmem for some of the base system including system_server. Their security team would obviously like to remove it everywhere, but for now the performance vs. memory vs. storage usage characteristics of mixed AOT and JIT compilation for ART are winning out over the security advantages of pure AOT compilation with execmem disallowed.

The only reason they're starting with requiring executables to be in the native library directory rather than native libraries (which is the main purpose of the directory) is because that will impact more apps, so the transition will be more painful. Apps can also disable native library extraction and map native libraries from the apk directly, since the linker supports mapping libraries from zips directly as long as they meet the requirements (page aligned and not compressed). They have a pass for properly aligning them in the apk build system.
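From the app side, opting into that is a manifest attribute; the loading code itself doesn't change (sketch, hypothetical library name):

```java
public final class ApkMappedLib {
    // With android:extractNativeLibs="false" on <application> (and the .so
    // stored uncompressed and page-aligned in the APK, which the build
    // tooling handles), this maps the library directly out of the APK
    // instead of from an extracted copy in the native library directory.
    static {
        System.loadLibrary("foo"); // hypothetical
    }
}
```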

It will actually log the warning about native libraries in the data directory for all API levels.

Here's where they continue to allow running executables from app_data_file for API < 28:

https://android.googlesource.com/platform/system/sepolicy/+/08450264ae3f917f6b8e4091d6fedf84ef8d796f/private/untrusted_app_27.te#30

Even though it's allowed, they do still log audit warnings about it for API < 29.

Also you can see that for API < 26, even more is permitted, since they've tightened up these policies for API 26+ in the past too:

https://android.googlesource.com/platform/system/sepolicy/+/08450264ae3f917f6b8e4091d6fedf84ef8d796f/private/untrusted_app_25.te

For example, API 26+ forbids execmod, which means things like rx -> rw -> rx transitions for files mapped from storage. You can only do those kinds of transitions in-memory via execmem now. They've removed execmem for a lot of the base system but not yet apps, which, as I mentioned, is primarily due to the ART JIT; it'd also be a major backwards-incompatible change. Finishing this up by removing the remaining 3 or so holes in the policy is definitely on their radar. It's just hard to get it done due to compatibility issues and performance/memory/storage vs. security tradeoffs made elsewhere that they'd need to decide are less important (and I definitely think they will, eventually).

This is great info @thestinger

One more thing to mention: Google is planning to deny any type of exec; read here: https://issuetracker.google.com/issues/128554619

Relying on exec() may be problematic in future Android versions.

The steps they took:

  1. Deny downloaded code execution using policy
  2. Now denying execution from /data/data/[app]
  3. Next step would probably be denying any exec, removing Runtime.exec / ProcessBuilder, ...

So any solution implemented here will probably stop working next year or even earlier. Essentially they will kill apps like Termux, server apps, ... unless you build everything using jni / ndk.

Essentially they will kill apps like Termux, server apps, ... unless you build everything using jni / ndk.

Not kill, if we switch to QEMU (system mode). It should be possible to turn QEMU into a shared library (the Limbo PC app did this) and attach serial consoles as terminals. Performance will be worse, but still usable.

That is, of course, if QEMU doesn't use mmap(PROT_EXEC), execmem or other similar things which may be blacklisted.

One more thing to mention: Google is planning to deny any type of exec

That's not backed up by what you're linking and I don't think it's true.

Deny downloaded code execution using policy

Executing downloaded code was denied by policy for a long time too. I only opened an issue when the wording became substantially stronger and it was considered malicious regardless of why it's done. I think Termux has been in violation of the policy for as long as it has existed, but it became more clear that it was serious a year ago.

Next step would probably be denying any exec, removing Runtime.exec / ProcessBuilder, ...
So any solution implemented here will probably stop working next year or even earlier. Essentially they will kill apps like Termux, server apps, ... unless you build everything using jni / ndk.

Spawning processes with exec based on the native library directory is fully allowed by their app policy and documented as a supported feature. A random Google engineer on the bug tracker urging you to be cautious about using exec is good advice and doesn't imply there's any plan to remove it. Mapping code from outside the native library directory and apk as exec is definitely on the way out, and class loading is probably on their radar. There is not much they could do about third party interpreters beyond forbidding it in policy. The general policy is that executing arbitrary code not included with the apk is only allowed if it runs in a well sandboxed virtual machine, with the Chromium web sandbox as a baseline to compare since it's allowed through that exception.

They have no security reason to remove it now that they enforce w^x for it. Relying on executing system executables is a bad idea and it'd make sense for them to continue stripping down what is allowed.

By saying it's problematic, they are not saying that it's going to be forbidden, but rather that it doesn't fit well into a model where apps are often killed at any time and then respawned with the expectation that they saved all of their state to be restored, etc. That model can be more aggressive on some devices, which is what makes it problematic. Switching away from the app can result in all these native processes getting killed, potentially losing data. In practice, it's fine, especially with a foreground service to prioritize keeping the app alive. It also applies to a lot more than native processes. A long-running foreground service doesn't conform to the activity life cycle either. Essentially the same potential for problems there.
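As an aside, a minimal sketch of that foreground service pattern (names are hypothetical; notification channel creation is omitted):

```java
import android.app.Notification;
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class SessionService extends Service {
    // Running as a foreground service raises the app's priority, making the
    // OS far less likely to kill it (and its spawned native processes).
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        Notification n = new Notification.Builder(this, "sessions") // channel assumed created elsewhere
                .setContentTitle("Terminal sessions running")
                .setSmallIcon(android.R.drawable.stat_notify_sync)
                .build();
        startForeground(1, n);
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}
```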

Not kill, if we switch to QEMU (system mode). It should be possible to turn QEMU into a shared library (the Limbo PC app did this) and attach serial consoles as terminals. Performance will be worse, but still usable.

I also don't think it's true.

That is, of course, if QEMU doesn't use mmap(PROT_EXEC), execmem or other similar things which may be blacklisted.

I think QEMU does rely on execmem. I'm not sure that it can be used without it. So, while that will currently work, it may not in the long term. I do think executing files from the native library directory is going to continue working in the long term. Using an emulator is certainly a viable approach though, but obviously slow, especially without execmem at some future point.

The viable long-term approach is using documented ways of doing things, and they have documentation showing how to execute something from the native library directory, along with that being fully supported for native libraries.

Well, I wish it were not true, but I doubt it, following recent actions by Google / Android.

It's an issue tracker thread aimed at productively finding solutions to future compatibility issues starting with API 29. There's no point in being hysterical / dramatic and treating understandable security changes as an unpredictable, draconian crackdown on apps. The compatibility issue is unfortunate. There are trade-offs to changes like this, and this is the downside. The upside is making apps more secure against exploitation while not hurting the vast majority of them.

It was denied by policy not so long ago - a year or two.

It was the policy for a long time that all code was signed and must come from Google Play. It was stated in multiple locations in different ways, and reworded into stronger wording over time. For example, here's one place the policy was stated in 2013:

https://web.archive.org/web/20130730033518/http://play.google.com/about/developer-content-policy.html

An app downloaded from Google Play may not modify, replace or update its own APK binary code using any method other than Google Play's update mechanism.

The intent is that apps don't execute code that wasn't signed and delivered through Google Play, other than inside a sandboxed virtual machine environment (such as web content).

Yes, it has been possible to play semantic games with all variants of the wording, including the very aggressively worded version introduced in the malicious apps policy. It's best to understand the intent of the policy and comply with that, rather than trying to find loopholes. It has been worded different ways even at the same time, on different policy pages.

So essentially strip down everything and allow only some games and ad/spam-ware apps to run; that would be really secure. w^x does not change security in most cases on Android because most security/privacy issues are created by pure Java / Kotlin apps.

That's not what I said and I don't know where you're getting that. I said relying on system executables is a bad idea rather than shipping them with the app. For example, using the system toolbox / toybox would be a bad idea even if the SELinux policy allows it. It's not part of the official API.

It remains possible to execute native code, but the implementation permits fewer violations of the policy than it did before, not as an attempt to enforce the policy but to mitigate vulnerabilities in apps.

It's expected that code is shipped in the apk, and either executed from the apk or the native library directory. You aren't supposed to execute Java code from outside an installed apk / app bundle component either. That isn't impacted by SELinux policy right now. They could teach the class loader to enforce rules, but as I mentioned, they fundamentally cannot prevent people from breaking the rules with interpreters via a technical approach.

As I explained above, this has nothing to do with protecting the system from apps or protecting user data from apps but rather improving security for apps themselves. That's certainly something important and is a real world problem. There have been many cases of apps exploited due to them executing code from app data. They introduced a partial mitigation for it, in a way that only truly breaks compatibility with apps they'd already defined as violating their policies. That's why it made sense for them to do this. Yes, it's unfortunate for Termux, but they aren't doing it to be evil.

You mean "problematic in future" Runtime.exec? :)

I mean exactly what I said. Apps executing system executables are relying on a non-public API and can definitely expect problems, just like using private APIs elsewhere. You're reading something and twisting the meaning into what you want to see based on your already negative outlook. You view these security improvements as something bad. It's a balance between different needs, and the balance is moving towards security. That will make some people upset, but it's what most people want. There has always been this balance in place, and most people think Android was way too permissive with apps and needs to continue changing.

I think the only viable future approach is to compile everything using the NDK and execute it using JNI, but in light of the Fuchsia / Chrome OS changes it is very unclear.

Fuchsia is not Linux and is effectively a different platform. They may end up implementing a very complete layer for Linux binary compatibility. The CTS requires Linux compatibility so if they want full Android support they need to provide it. That's their problem to figure out. Executing native code is still possible there, but it's not Linux and currently isn't compatible with Linux at the ABI or API level. ChromeOS simply ships Android as a whole in a container and provides an Android compatible Linux kernel. It's just another kind of Android device and is CTS compatible.

The low-level portions of Android like Bionic and SELinux policy are entirely developed in the open rather than having code dumps. I also talk with members of their security team fairly regularly. It's not a mystery what they are working on and where they are planning on taking it in the future. It's not surprising that they partially implemented a very standard security mitigation used across other operating systems in various forms (not just iOS), and it's in line with their existing policy. You can look through the SELinux policy repository and see commit messages and code comments describing many of the future breaking changes.

They released the v2 sandbox (targetSandboxVersion) with Nougat and among other things it disabled app_data_file execution as a whole. That was essentially a preview of future changes.

There are a few long-term options. One is figuring out a way to ship code as part of apks and a way to offer users the option of generating an apk wrapping their native code in a usable way. Another is using a virtual machine to run everything, which would also work on non-Linux platforms, but supporting those is probably never going to be necessary. Those platforms would need to implement Linux compatibility to be Android compatible. This would be slower, especially if support for using a JIT compiler in untrusted_app goes away. It could be moved to isolated_app, but that would be hard.

@esminis Your previous comment seems to be mostly a rant. If you are unhappy with the road Google is taking then I suggest you either complain to your local Google office or stop using (official) Android altogether; complaining here won't change anything.

You can do any mental gymnastics you want here, but it does not forbid downloading and running third-party code, only modifying the already installed APK. And yes, later they updated the policy with the malicious behavior section, which denied third-party code download / execution.

It's not possible for an app to modify the installed APK at a technical level. The policy used to state that apps couldn't modify, replace or update the code (extending it counts as updating it too, as bringing in the new code is an update). Apps obviously can't touch the APK itself, since the OS doesn't permit them to do that. Their policies are not listing out things that are implemented by the OS sandbox and permission model, because you're forbidden from exploiting it in general.

I think it's ridiculous for you to make these disingenuous misrepresentations of the policies and then claim that I'm the one doing mental gymnastics. It would be completely fair to disagree with the policy and argue that the drawbacks of the limitations it imposes are not worth the security advantages that it provides. That's not a debate about the facts but rather the value of allowing apps in the Play Store to do whatever they want vs. them policing it. Part of them being able to police it is being able to review and analyze the code used by apps, which they can't do if the code is dynamically downloaded from outside the Play Store. Separately from that, having non-read-only code is a security issue in general, and the system packaging system provides substantial security by not allowing apps to write to their code, including native executables and libraries it has extracted. When installing apps with apt on a system like Debian, the same thing is true, and in general very few apps will install code to your home directory because it's a bad practice with security issues. Disabling in-memory and on-disk code generation to improve security is an extremely common practice on traditional Linux distributions too. Even web developers without systems programming knowledge are taught to make their server side code read-only for the web / application server user, so a file write vulnerability is not arbitrary code execution. It's part of security 101.

I certainly agree that the policy is unfortunate for use cases like Termux and that it will be painful to adapt it to an approach like apk-based packages. I do think an approach like that is viable and it wouldn't make using it substantially worse unless you need to compile / download and run native code for development. It won't impact development in scripting languages like Python, other than installing libraries with native code, which could be packaged by Termux via apks in the same way. It does definitely throw a wrench into native code development workflows, since you would need a system for generating an apk packaging up your code. I don't think that's completely awful though, and it would be neat to be able to generate a standalone apk out of the code not depending on Termux, which is definitely feasible. It could package Termux into the generated apk with the user code.

Yeah, that is the main point. The road to hell is paved with good intentions - earlier call recorders, now Termux, servers, ... For those who came to Android because it was a platform that allowed doing advanced stuff, those changes are a big evil.

Call recording is still often provided by the base OS along with screen recording capturing audio. It's also worth noting that only the native telephony layer calls are impacted at all. An app like WhatsApp or Signal is fully allowed to provide call recording for their own calling implementation if they choose. There are regulations on the legacy telephone infrastructure for call recording, and they are expected by the authorities in some regions to prevent it or at least to add a notice that the call is being recorded. I suggest avoiding telephony layer text and voice calls in the first place because they aren't secure and in practice many people / organizations are able to listen in on them if they choose.

Apps like Termux are still allowed and possible. The part that's disallowed is the internal apt-based package management, but that was disallowed since at least 2012 anyway. The SELinux policies just hadn't advanced to the point of assuming apps don't violate that policy in order to implement a mitigation.

Of course it's bad, because it takes away freedom for the sake of security. In my dictionary that is never good. Of course it would be possible to implement this properly by letting the user choose whether they want an app executing external code or not - simply add a permission and the issue is resolved (the same goes for other breaking changes that users are still unhappy about - like SAF, call recorders, ...).

That model doesn't work, even for experienced power users and developers. Apps are in a position to coerce users into accepting permissions. They choose not to implement fallbacks. For example, an app could respond to being denied the Contacts permission by using the API for having the user pick contacts to provide to the app which requires no permission. Are you aware of any app doing that? The same goes for storage access and camera access. They could ask for what they want and then fall back to a working implementation without a bulk data access permission. However, they don't, as they know users have to say yes to get done what they want to get done. The alternative is removing an app, which is often not even an option unless you want to lose your job, make your friends upset, etc. A better way of doing it is having the user in control of exactly what is granted, like picking which contacts (or groups of contacts) an app is provided with, or which subset of the storage is provided (SAF), etc. Android has had the better way of doing things in many areas for a long time, such as SAF being introduced in 4.4. It's no good if it's not what the ecosystem uses, even as a fallback.
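For a concrete example of the kind of fallback being described, an app denied the Contacts permission could let the user hand it a single contact through the system picker, which needs no permission at all (sketch; the request code is arbitrary):

```java
import android.app.Activity;
import android.content.Intent;
import android.provider.ContactsContract;

public final class ContactPickerFallback {
    private static final int REQUEST_PICK_CONTACT = 1;

    // No READ_CONTACTS permission needed: the app only receives the one
    // contact the user explicitly chose, as a Uri in onActivityResult().
    public static void pickContact(Activity activity) {
        Intent pick = new Intent(Intent.ACTION_PICK,
                ContactsContract.Contacts.CONTENT_URI);
        activity.startActivityForResult(pick, REQUEST_PICK_CONTACT);
    }
}
```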

I'm not sure why people would be unhappy about the storage changes, since those only put users in control and they can still grant apps access to everything. Look at Termux itself for a good implementation of part of SAF. It has a storage provider exposing the internal storage to the file manager. You can move files / directories in and out of Termux without trusting it to access anything else. The other part of the picture is apps requesting access to external files / directories outside of their own scope, which is something Termux could do, although I don't think it's worth implementing. The existing implementation is already solid.
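The client side of that second part is small, for what it's worth (sketch; the request code is arbitrary): the app asks for a directory via the system picker and persists exactly the grant the user chose.

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

public final class SafTreeAccess {
    private static final int REQUEST_TREE = 42;

    // Let the user pick a directory; access is scoped to their choice
    // and can be revoked later.
    public static void requestDirectoryAccess(Activity activity) {
        activity.startActivityForResult(
                new Intent(Intent.ACTION_OPEN_DOCUMENT_TREE), REQUEST_TREE);
    }

    // From onActivityResult(): persist the grant across restarts.
    public static void persistGrant(Activity activity, Intent result) {
        Uri tree = result.getData();
        activity.getContentResolver().takePersistableUriPermission(tree,
                Intent.FLAG_GRANT_READ_URI_PERMISSION
                        | Intent.FLAG_GRANT_WRITE_URI_PERMISSION);
    }
}
```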

Yeah, you sure - who are those people who want that? Maybe the same people that wanted SAF forced on them or call recording denied (I only read angry comments about those).

I certainly care about the privacy and security of my device. I've always been incredibly unhappy with the way that external (as in shared between apps) storage worked. The storage changes are fantastic from my perspective and don't eliminate any capabilities. Termux won't have an issue with that. It can choose to provide more SAF-based functionality or not, since the existing functionality is already fine.

No, there are not - you can't be sure any of those will be allowed in the future, as it seems to have become unpredictable. One rule change and Google will deny any apps that include an emulator or pack binary code in the lib dir, allowing only loading .so files with JNI.

It's not at all unpredictable. There was no change in policy beyond more aggressive wording and considering apps violating it to be malicious. You'd probably argue that their new wording allows apps to add and run new code dynamically outside a sandbox because they specifically state 'downloaded' but the intent of the policy is as clear as it has always been. Trying to find loopholes in the wording doesn't work with these things. It's not code or a technical implementation of rules like SELinux policy. It works like a code of law, where it is interpreted by humans and intent of the creators of the law is what matters. You have the option to avoid it by avoiding the Play Store, in which case you're only restricted by the OS implementation, but whenever possible that implementation will enforce or take advantage of their policy for apps. They won't go out of the way to leave out a nice security feature like this which doesn't heavily impact compliant apps.

Or maybe, if you have some inside info on what will be forbidden in the near / further future, you can share?

It's not really insider information, since most of this stuff is developed publicly and the developers are quite willing to talk about it and take suggestions. I mentioned where things are heading above: there are auditallow rules for mapping app_data_file as executable so that's on the way out, and the same can be expected for execmem in the long term, but it's blocked by ART using it. If ART didn't use it, I'd expect there would be an auditallow rule already. In Android 6, it only used AOT compilation, so it didn't need that. That's a design compromise their security and runtime teams will need to work out over time. They already got rid of execmem for a lot of the base system, by relying only on AOT compilation for parts of it. I'm sure they will continue allowing execmem (i.e. in-memory code generation) within a sandbox (currently isolated_app, but it could become isolated_app_execmem or whatever with opt-in to it) since they definitely want to allow things like JIT engines for browsers - but that doesn't mean they need to allow it outside of a sandbox. Once Firefox has a proper sandbox, that removes another big blocker for this.

I haven't been very active in the Android privacy and security world lately, due to very unfortunate events that occurred (nothing to do with the Android development team), so I wasn't following this and didn't have input into it. However, it is something I've worked on directly in the past. In my work, I implemented a complete version of what they're headed towards: fully forbidding any dynamic native code and requiring it to be in the apk / native library directories. I provided a system for making exceptions, but I wasn't in a position to get apps to move their libraries, and I have a different approach with different trade-offs. I filed the original issue last year to act as an early warning and to encourage considering other ways of distributing code that would be compatible with hardening like this, so Termux can exist on a system with that hardening whether or not Google did it.

Anyway, what I want is a system with great privacy and security but also a working Termux app. I think that both are possible, although it requires some thinking and a lot of implementation work. Luckily, there's a lot of time to become compatible with this change. Termux will still work on Android Q with API level 28. It becomes a blocker for Play Store updates over a year from now, and it becoming a blocker for sideloading the app is at least several years away.

The main impact of moving to apks would be download size (since package granularity will likely be far coarser) and the inconvenience of native code development. One way to make native code development more practical would be having a script for self-building Termux and the packages you use, in order to have it signed with your own keys, so you can generate your own extension apks with your native code, including from within Termux itself.

In my opinion, that's really not the end of the world.

@xeffyr Using QEMU is a good idea (it seems much more reasonable than asking the user to build an APK each time they want to build something from source), but I think that there is more potential in using isolatedProcess and just bridging all required system calls using proot.
This will not harm performance and will let Termux remain the same in the long term. Furthermore, formally it will be "sandboxed", and it will be perfectly legal to download any kind of binary code.
Google will definitely not block JIT in isolated_app, as this would immediately cause an antitrust case with the Firefox authors (and probably somebody else).

it is possible to execute binaries from noexec directories using proot

Yes, see #1072 (comment) - for the case of Android Q.

Regarding /sdcard, proot should be built with a special extension: termux/proot@ebcfe01. Note that this extension is disabled by default.


Proot will work until Google goes beyond just blocking execve().

execve()

There are other ways to execute machine code from ELF binaries, not only execve().

Something like reading the file and executing it from memory - shmem, mmap, fexecve?

@thestinger Do you know if there are plans to disallow dlopen() on shared libraries extracted to application data (not /lib)? For now it is working; I don't even see any warnings or avc denials when doing it.

You are one of several people I know who likes SAF - have you ever tried using any file managers? In most cases only advanced users can use the external SD card in file managers, due to SAF.

This claim doesn't make any sense. If an app implements SAF for loading and storing files, the user is presented with the system file manager UI and is in full control. They can save / load from external storage or alternative storage providers like the one provided by Termux. They can provide persistent access to directories for the app to do whatever it wants inside those. Have you never used the standard Android Files app and the features of various apps implementing SAF as clients and providers, including Termux? It doesn't seem like it.

Sure, if apps haven't adopted the SAF API introduced in 4.4, they won't see external storage or alternative storage providers like Termux, since that's how those are exposed... I'm not sure how that is the fault of the API they should be using. Most apps haven't, since they'd rather just write insecure, privacy-invasive software not respecting consent. It's easier to just ask for bulk data access in the legacy way, and it should be no surprise that the option of doing that is finally going away, with users left in control.

If you want to provide an app with full access to everything, SAF lets you do that. Adopting SAF would have given apps more capabilities, like being able to load / store to an external storage device, the internal Termux storage, and other storage providers such as those exposing cloud storage directories, network shares, etc. It doesn't take away capabilities but rather abstracts file access in a way that makes it more powerful while giving the user much more control, by being able to define exactly how narrowly / broadly the app can access files. Most developers certainly aren't fans of giving the user control, especially when they need to do work to provide it, along with putting in the minimum effort possible to adopt new APIs, even when it would substantially improve the app. They still aren't forced to adopt it in Android Q, but they can't get access to the entirety of external storage anymore, only what the user wants them to be able to access. That can be everything, but it probably isn't what users want, so they won't do that.

And no, it is absolutely not security 101 for the web to make files read-only, because if a website already has a remote execution issue, read-only access won't help much.

You're once again completely misrepresenting what I said.

In webdev it is much more critical to properly filter input/output; if you don't do that, no other security measures make any difference.

It's obviously important to apply good development practices and to avoid vulnerabilities whenever possible, but bugs will still happen anyway. Using tools that not only prevent many classes of vulnerabilities but also mitigate their impact when they aren't outright prevented is important. What I said, which you misrepresented, is the following (simply quoted as is):

Even web developers without systems programming knowledge are taught to make their server side code read-only for the web / application server user, so a file write vulnerability is not arbitrary code execution. It's part of security 101.

The entire existence of features like Content Security Policy is defense in depth. If you don't understand that concept, I really don't know what to say. The principle of least privilege and defense in depth are such basic concepts that you should really know and understand them without having any involvement in software development. It's basic rational thinking. If you don't provide the capability to cause harm, you prevent harm from being caused even if something does go wrong which it often will despite substantial effort to prevent it.

Also, this is not the place to argue about privacy / security vs. freedom, especially when one side thinks it is important to have a secure app that does nothing, and the other thinks that an app / platform that does something useful matters much more than how secure it is.

It's you coming here disrupting a productive discussion with your misinformation. I'm not going to leave the falsehoods uncorrected when they are so harmful. The storage changes in Android Q don't reduce the capabilities of apps. It puts users in control. They can choose to give an app access to all the directories in shared storage, or they can choose not to do that. The app doesn't get to just ask for access to everything via a prompt, but rather the user chooses the level of access in the system file manager UI for the app. I don't see any issue with that, and you either don't know what you're talking about or you're just lying to push your negativity.

It has nothing to do with this issue and I don't know why you keep going on about it from your position of complete ignorance. I'd suggest educating yourself about the very basics of what you're talking about before trying to debate the merits of it. It helps no one and only makes you look foolish.

Proot will work until Google goes beyond just blocking execve().

It will work, but copying data into rwx memory would be very inefficient along with being a security issue. When executables and libraries are used normally, they get paged from storage as needed. Only the pages that are actively used need to be in memory and can be paged out. Copying the data in-memory results in the entirety of the code being stuck in memory as dirty pages that can't just be dropped but rather need to be preserved. Copying everything into memory to execute it isn't going to scale well beyond small programs and libraries.

Do you know if there are plans to disallow dlopen() on shared libraries extracted to application data (not /lib)? For now it is working; I don't even see any warnings or avc denials when doing it.

Yes, as I mentioned, they added an auditallow rule for app_data_file as execute. That means it will generate audit warnings in the kernel log on Android Q. The holes in the policy are known and are slowly being addressed. The others are ashmem/tmpfs execute and execmem. Injecting new native code is something they can and are working on preventing via SELinux policy. My previous comments elaborated on this along with the current blockers for some of it like removing execmem.

My first comment mentioned these current holes (which are officially known, and commented on) and that they are only taking one more step towards enforcing prevention of dynamic native code. They took other steps earlier, like removing execmod and various other legacy execute permissions. Trying to use these known holes in the SELinux policy doesn't seem like a viable long-term approach.

They cannot prevent class loading or other interpreted code from being run from app_data_file via SELinux policy, but they can certainly do it for native code, and they've been working towards that for a long time. It's finished for a lot of the base system already. Doing it for third party apps is harder due to compatibility issues, since apps need to migrate to the officially supported way of doing it. That's why they are doing it so gradually and you hadn't noticed until the latest set of changes.

Their policy for malicious apps forbids far more than they can ever enforce via SELinux policy. Something being allowed by SELinux policy doesn't mean that it's permitted or ever was permitted. The reason they implemented this SELinux policy change is also not to forbid apps from doing something to enforce their malicious app policy. It wasn't a change to protect the system from apps, but rather to make apps themselves harder to exploit. Their higher level policy on code from outside of apks allows them to make the change without worrying they've prevented doing something that wasn't disallowed already.

@thestinger
So, do you know a way to use fopen (the most proper file access method in C) on files via SAF, if it is such a mature, properly implemented API?
Passing file descriptors via unix sockets doesn't give direct access even to the allowed files (you can't pass a directory file descriptor, as you cannot get it). The import / export pattern doesn't give direct access to the allowed files, by definition.
P.S. this is off-topic, even though Android Q-related, so a short answer is preferred.

Copying everything into memory to execute it isn't going to scale well beyond small programs and libraries.

Luckily, modern devices have enough RAM for this. Measurements should probably be done, though.

So, do you know a way to use fopen (the most proper file access method in C) on files via SAF, if it is such a mature, properly implemented API?

SAF provides access to more than regular local files and provides a more sophisticated access model than the Linux file access API is capable of representing. SAF is used for network shares, cloud storage and many other cases. Linux doesn't have the ability to revoke access to files, which exists on some other operating systems (https://lwn.net/Articles/546537/), and it also doesn't have a capability system for file descriptors like Capsicum (https://lwn.net/Articles/482858/). It doesn't make sense to expect the libc fopen implementation to fit into this. On the other hand, you can definitely make an implementation of the libc stdio API with SAF compatibility.

Termux already implements a storage provider itself, so other apps can access the internal data via SAF and the user can access it directly via the file manager too. It could provide a client side SAF implementation by having a command for mounting SAF files / directories within the file system hierarchy. That could be implemented by providing a more sophisticated stdio and POSIX file access implementation with support for a virtual file system for SAF. So, you could run a command like smount /mnt/cloud_storage, which would open up the system file access API, where you could select an SAF provider from a cloud storage app. You could also use it for local access, but it already has direct access to data scoped to itself that it exposes to other apps.
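The plumbing underneath such a layer would presumably be SAF's real file descriptors (sketch; the class name is hypothetical): the fd can be handed to native code and used with fdopen() and friends.

```java
import android.content.ContentResolver;
import android.net.Uri;
import android.os.ParcelFileDescriptor;
import java.io.FileNotFoundException;

public final class SafFdBridge {
    // Open a SAF document read-only and detach the raw fd. Ownership
    // transfers to the caller: the native side must close() it.
    public static int openRawFd(ContentResolver resolver, Uri documentUri)
            throws FileNotFoundException {
        ParcelFileDescriptor pfd =
                resolver.openFileDescriptor(documentUri, "r");
        if (pfd == null) throw new FileNotFoundException(documentUri.toString());
        return pfd.detachFd();
    }
}
```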

Most apps don't have any need to worry about these things, because they don't have a reason to expose higher-level functionality via a traditional command-line environment.

Passing file descriptors via unix sockets doesn't give direct access even to the allowed files (you can't pass a directory file descriptor, as you cannot get it).

The non-proxied file descriptors wouldn't provide you with access even if you could obtain them, since that's forbidden by SELinux policy. SAF does provide file descriptors for the proxied file handles.

It can't pass you raw file descriptors from the other side because of the limitations of POSIX. It's a higher-level API than raw POSIX file access, and that's the level where these things are implemented.

A compatibility layer is certainly possible, but as a lossy abstraction, like representing it as virtual mounts in the file system tree, and just handling all the management (requesting access, noticing access was revoked, etc.).

Luckily, modern devices have enough RAM for this. Measurements should probably be done, though.

It means if the executables and libraries add up to 300M, it will use 300M of memory, with the OS unable to page it out. It also only bypasses this initial incremental step they took towards improving the SELinux policy along with the planned follow-up step of disallowing app_data_file execute. It doesn't fix that it's a violation of the app policy and won't survive the eventual completion of the SELinux changes.

If they end up accomplishing what they started on, which involves removing about 3 more of the SELinux rules, injecting native code in untrusted_app won't be possible: ashmem/tmpfs execute, app_data_file execute and in the longer term execmem which would almost certainly remain for isolated_app or a variant of it. They could keep the ART JIT compatible by generating code in a separate process, similar to the Microsoft Edge approach, or they may end up deciding it isn't worth having due to them increasingly shifting the balance slightly towards security in these compromises. Chromium isn't impacted, since the renderer uses isolated_app. Firefox would only be impacted if it doesn't use a basic sandbox by then, and there aren't other browser engines of note not based on one of these. There are other JIT compilers, but they could be expected to use isolated_app sandboxes too.

So, it seems like the core function for accessing files in C/C++ is broken with nothing in return (that aforementioned "compatibility layer" hasn't been provided by Google since the release of Android 4.4). This is exactly why nobody likes SAF; all the other benefits, like remote mounts, are not important.

It means if the executables and libraries add up to 300M, it will use 300M of memory, with the OS unable to page it out.

This isn't really a concern, given that modern phones have 4GB+. Anyway, this is the only choice that keeps the end-user experience the same on a non-rooted device.

If they end up accomplishing what they started on, which involves removing about 3 more of the SELinux rules, injecting native code in untrusted_app won't be possible: ashmem/tmpfs execute, app_data_file execute and in the longer term execmem which would almost certainly remain for isolated_app or a variant of it

That's why I suggested using isolatedProcess for executable code + proot to intercept and bridge syscalls that require the help of untrusted_app. This solution is long-term and has a good end-user experience. Provided that the bridge layer implements some security measures, it will become a secure sandbox (with "limited access to Android APIs"), so there will be no Google Play policy violation. Even if Google considers this a violation, users are always more important than disputable administrative decisions.

So, it seems like the core function for accessing files in C/C++ is broken with nothing in return (that aforementioned "compatibility layer" hasn't been provided by Google since the release of Android 4.4). This is exactly why nobody likes SAF; all the other benefits, like remote mounts, are not important.

Apps can still access files in both internal and external storage directly, and the user can access their directory in external storage via the Files app. It's not being broken, and SAF isn't involved in those basics. It's one of the options available to apps.

The primary purpose of SAF is providing a privacy-respecting way of accessing external data, where the user is fully in control and can choose the scope of access along with revocation being possible. It has to abstract storage access to provide that, and as part of that provides the ability to represent arbitrary forms of storage via the API rather than only subsets of the local file system.

You're assigning zero value to users having privacy and precise control over the scope of app data access, rather than it being all or nothing access to external storage where users are coerced into giving up their privacy and security simply to use many apps at all.

This isn't really a concern, given that modern phones have 4GB+. Anyway, this is the only choice that keeps the end-user experience the same on a non-rooted device.

Sure, I can agree with that, but it has significant drawbacks (substantial performance hit on system calls from ptrace, memory usage, and the security drawbacks of using rwx memory). There are some other issues like the fact that it has to emulate ptrace, since it can't be nested, and it's not a fully complete implementation of it, which could impact debugging.

That's why I suggested using isolatedProcess for executable code + proot to intercept and bridge syscalls that require the help of untrusted_app. This solution is long-term and has a good end-user experience. Provided that the bridge layer implements some security measures, it will become a secure sandbox (with "limited access to Android APIs"), so there will be no Google Play policy violation. Even if Google considers this a violation, users are always more important than disputable administrative decisions.

Yes, I think that's a viable long-term option, and I doubt that they would consider it a violation of the policy. It's allowed as long as it's in a meaningful sandbox, and having it in isolated_app with proxied access to a sub-directory of the Termux data directory could be considered good enough, even though it still has fairly direct kernel access, etc.

The reason for the policy is to require that all code used by apps outside a sandbox is uploaded for review and analysis. It's a prerequisite of enforcing other policies. They aren't currently doing much enforcement as a whole, which is why it hasn't been noticed or enforced for Termux. That could change suddenly, of course.

CSP is essentially only a defence for the frontend (XSS, for example), which is the last thing you want to look at when defending against dangerous attack vectors. Yes, it can be one of the defences, but it is surely not something of high importance for defending your server and your users' data. If you mention it as a high-importance defence, I really really hope you don't manage any servers or handle user data...

I used CSP as a separate example of defence in depth, and it's definitely an important thing to implement for client-side web security. It's one of the few forms of defence in depth that are available there. It's quite comparable to implementing restrictions with SELinux policy on a server, and can implement comparable mitigations like disallowing code injection to avoid escalation of many bugs into code execution. It being defence in depth doesn't mean it isn't important. There are many examples of important defence in depth security features.

ASLR, NX, stack/heap canaries, CFI, memory allocator hardening, memory tagging, pointer authentication, etc. are all defence in depth for memory corruption vulnerabilities. Disallowing execmem is one of those too. It's obviously best to avoid having the memory corruption bugs in the first place, but that doesn't mean it isn't important to make as many classes of those bugs as possible unexploitable and to raise the cost of exploiting others. Even in a memory safe language, which avoids the bugs in the first place, there can be memory corruption bugs caused by bugs in the compiler, runtime or native library dependencies. They are not going to be avoided as a whole, thus efforts to add multiple layers of defence in depth alongside efforts to reduce the occurrence of vulnerabilities and discover existing ones.

I'm quite amused by you trying to question my competence for giving CSP as a well known example of defence in depth though. If you don't understand the concept, I don't really know what else to say. I tried my best to explain it and offer some relevant examples, like the well known guidance of implementing w^x policies on web / application servers, loosely comparable usage of CSP for client-side web content and of course native code. It's something very standard and highly valued. Preventing a file write vulnerability from becoming code execution doesn't mean you stop worrying about preventing file write vulnerabilities. They still matter, but it doesn't mean you don't try to reduce their impact and contain them when they do happen. An app extracting a zip file in an unsafe way should not lead to trivial code execution, and yet it does if they manually extract their libraries to app data instead of using the native library directory.

The principle of absolutely no direct access is the best defence, not least privilege, as privileges can be escalated.

I'm unsure on the relevance to the discussion. As I stated before, the importance of avoiding vulnerabilities doesn't mean it isn't also important to mitigate them when they do happen. Preventing a file write vulnerability from escalating into remote code execution or dumping data from a server is very useful. I don't think it's productive to go through basic obvious principles of security here, or to try explaining the need to compromise between different design goals. If you don't get it, that's fine, but there is no point in discussing it further. It's not the topic.

It is good development practice to keep backward compatibility and not to remove features solely due to security.

That's not an attitude compatible with making systems with decent privacy and security. Closing security holes is one of the few reasons that backwards compatibility is broken by software like the Linux kernel and Android. It's not the topic at hand, but reducing attack surface is a very important part of securing a system too.

Software development involves balancing competing concerns and making design choices that achieve good compromises between them. The competing concerns are evolving and not something static. If you don't value privacy and security, it's understandable that you're upset when the balance shifts towards those. What's strange is an inability to see that these are valid, useful changes even if you disagree with the design choices. I do things differently than Google in my privacy and security hardening work and generally preserve compatibility via adding user controlled toggles to disable features, but there are major disadvantages to that.

Yes, if you don't know the Android user base / release history, it does not make sense. "Scoped Directory Access" was introduced only in Android 7, and to this day >50% of users are stuck without it, so it is not so easy to grant permission to access external shared storage.

This has nothing to do with what you're replying to.

You are completely misrepresenting facts - no, I have not mentioned any shared storage changes in Android Q, only the internal storage exec permission changes. As far as I know there are no big SAF changes in Android Q.

No, I'm not misrepresenting facts. You're trying to project what you're repeatedly doing onto me, which is something that other people can easily see through. It's you who brought up these other topics in your poorly informed, inaccurate rants about other topics. Are you now going to deny that? Wow. You are the first person to mention SAF in this thread. The discussion was previously focused on the topic at hand: the app policy forbidding non-sandboxed executable code that isn't from the apk, and more specifically the introduction of SELinux rules forbidding dynamic native code execution violating that policy (currently only for running executables directly, but with more changes lined up and the predictable closing of the other remaining holes further down the road).

If you don't want to be called out on your bullshit, don't do it in the first place. I don't understand what you hope to accomplish by coming here and spreading misinformation / confusion along with disrupting the discussion. You are repeatedly being disingenuous, manipulative and dishonest. It's ridiculous to keep needing to refute outright lies and manipulations from you. If you don't have any useful information or ideas to contribute, why are you here?

As I understand it, @thestinger mentioned implementing something like FUSE, which I guess would not be so easy to implement, if possible at all.

I mentioned that it's possible to write a thin wrapper around SAF for the file access APIs representing it as virtual file paths. It's hardly impossible or even particularly hard to implement. A command for requesting SAF access and then exposing it via a virtual path is a straightforward concept. Have you missed the discussion about using ptrace to intercept exec and other file system access calls to change how they behave? It could also be done without ptrace, as it would be fine if it only worked for programs using libc rather than doing their own system calls like Go. It's not at all part of what needs to be implemented to work around this though. Termux is barely impacted by the storage changes since it has a nice storage provider already, as I mentioned. Doing more is a completely optional frill that never needs to be implemented and should really be discussed elsewhere if there's even interest in that.

I'm quite amused by you trying to question my competence for giving CSP as a well known example of defence in depth though.

You are still missing the point, so let's spell it out: you first need to make sure your web app is secure against the main threats, not against threats that are rarely used - that was the main point. Of course there are lots of ways to attack and lots of ways to defend, but if your main defences are down, it makes no difference whether you do some kind of systems defence or not.

It is good development practice to keep backward compatibility and not to remove features solely due to security.
That's not an attitude compatible with making systems with decent privacy and security.

That is absolutely the attitude you should take - if you have nothing to secure (no users), it makes no difference what security measures you take.

Software development involves balancing competing concerns and making design choices that achieve good compromises between them. The competing concerns are evolving and not something static.

Yes, of course, but when you make security more important than functionality, you lose functionality and you lose users.

No, I'm not misrepresenting facts. You're trying to project what you're repeatedly doing onto me, which is something that other people can easily see through. It's you who brought up these other topics in your poorly informed, inaccurate rants about other topics.

Yes, you are. The part you are commenting on was only about how you sacrifice app functionality due to security - to be precise, exec in the data directory. SAF was mentioned in an absolutely different context (as another loss due to Google's change of mind). And to the best of my knowledge, SAF in no way changes the exec permission on Android Q; SELinux does that.

If you don't want to be called out on your bullshit, don't do it in the first place.

You call this bullshit - telling millions of users: no, your app can't work because some "security expert" decided so; no, you can't use some functions because he thinks it is more secure. You know how many users of the apps I work on have complained about security / privacy? Zero. Now imagine what will happen when an app goes down because critical functionality is denied due to security.

So before calling something bullshit, first at least make a project that has more than 10 users, sell it, and then try to tell those users that it will not work anymore just because of - yeah, you know - security.

This whole discussion should be about how bad it is to deny exec in the data directory, and about proper workarounds (not a VM or ptrace), if any.

You are repeatedly being disingenuous, manipulative and dishonest.

You're calling me dishonest, really? When you outright lie - just for example, one of your lies is that a sandbox is allowed by policy, when it is clearly stated that only a VM is allowed:

It's allowed as long as it's in a meaningful sandbox.

If you want to discuss this more, please grow up and go back to the main topic; otherwise, at least be civilized while commenting.

I'll mark as spam future comments (and some of the previous comments) that are, in my opinion, about philosophical topics rather than about the Android Q security enhancement and what it means for Termux.

Is this a potential use case for Dynamic Delivery using App Bundles?

Android App Bundles seem to have an issue where they force extractNativeLibs to be false.

You can force extractNativeLibs=true in an Android App Bundle, by placing android.bundle.enableUncompressedNativeLibs=false into gradle.properties.

With a 500 MB max, you might still be able to list multiple groups of packages as separate listings (yielding multiple packages below 500 MB total). However, this would still require another solution for distribution outside Google Play.

For what it is worth, we updated UserLAnd a while back to include proot, busybox and everything else needed for getting launched into a rootfs as libraries within the APK. I can confirm that this method made us Q compatible and our last several releases have targeted the latest SDK.
This method should then definitely work for Termux, but it will have some pluses and minuses. The main negative is that there will be a performance hit.


Samsung's Linux on DeX is dead.
https://www.xda-developers.com/samsung-ends-linux-on-dex-beta-android-10-update/amp/
We only have Termux left to use. I am sad to hear about the Android 10 exec issue.
I use termux for java development. OMG

There is some new info from Google's Android Developer Summit.

https://www.youtube.com/watch?v=JgpCrYkC1OM

Here, Google is discussing changes in the storage API, the negative feedback some of it got, and what they are doing moving forward.

I was surprised to see the bullet point

  • Support for native libraries and file paths


There are many more changes besides just that, some possibly related.


Some points of particular interest to this issue, for the next Android release:

  • Enable file paths and native libraries for media
  • Enforcement by target SDK

What does "native libraries for media" mean? They did not expand on that.

Nothing is said for execution restrictions, outside of the bullet point, and a brief mention that "many developers have the need for C libraries".

There is still much ambiguity. But could this be a signal that Termux can target new versions of Android?

Also possibly related is a permission for unrestricted file system access.

We're also adding special app access for apps which can truly demonstrate a need for broad access to shared storage. There are a limited number of use cases that we believe can demonstrate the need. A couple examples are file manager and back-up and restore apps. Only apps that can prove this need will be given access to it.


Only apps that can prove this need will be given access to it.

I really wonder how they want to handle this in practice - hopefully not in the same way iOS App Store QA forces developers to change their apps in arbitrary ways to get them approved.

It could also be interpreted as media access via native libraries (with the NDK), in which case this doesn't change anything.

https://android-review.googlesource.com/c/platform/bionic/+/954688

From June 10. Could be what was alluded to.

Add android_run_executable()

android_run_executable() works similar to execv(), but with an
important difference that it preserves existing memory mappings.

Only apps that can prove this need will be given access to it.

Thank you, Google, for making my decisions for me. I hate having to make the effort to decide things for myself.

For anyone following this issue, note that there has been some relevant discussion on gitter in the last two weeks:
https://gitter.im/termux/dev

See also:
https://wiki.termux.com/wiki/Dev:Development_Sessions
https://github.com/termux/termux-packages/wiki/Termux-and-Android-10/071b6335ac530601b1d0b321ebb8d1cb23491a71
https://github.com/termux/termux-packages/wiki/Termux-and-Android-10/51dfe08a7e54351ea11c2994a6bc10b492bd818f

By the way, did you consider and rule out the suggestion by @n0n3m4 and @thestinger to use isolatedProcess or was it just lost in the comment thread? It's the method used by Chrome to run web code in a sandbox and Firefox Preview is moving to it as well. And it seems to be compatible with both SELinux and the Google Play policy.

Why not fork() Termux and then load proot as a shared library? That way, even if the app library folder becomes noexec, it should still work (unless fork() is blocked).

As far as I understand, the issue is not with loading proot (which we can bundle with termux) but with running dynamic code downloaded via apt, pip, gem, cpan etc. or compiled from source. Running dynamic code makes it impossible to audit apps for malware, so its use is disallowed via Google Play policy and increasingly hardened SELinux rulesets. As @thestinger noted earlier, there are still a few loopholes in SELinux in Q, but on the one hand they are still against Google Play policies, and on the other hand they will eventually be closed and then we are back to square one. But there is one legal way to run untrusted code: put it in a restricted sandbox by starting a service with the isolatedProcess property set in the manifest. (We still need some extra code – perhaps based on proot – to set up the environment in this sandbox and communicate with it.)

There was a discussion on gitter starting at https://gitter.im/termux/dev?at=5dbfc4cd3d669b28a03014f2 mostly involving @fornwall, @xeffyr and @RalfWerner. They started with 3 possible options and mostly converged on using apks or dynamic feature modules for package installation. They are also trying to find some auxiliary method (qemu, proot or whatever) for user-compiled code. I didn't see any mention of isolatedProcess / isolated_app in that chat; that's why I was asking whether it had been previously excluded (somewhere in the long scrollback in the chat) or just simply forgotten (because of the long scrollback here).

Using isolatedProcess will make things a lot more complicated. If I understand correctly (from the docs: this service will run under a special process that is isolated from the rest of the system and has no permissions of its own. The only communication with it is through the Service API), things that run under such "isolation" will not have any permissions, so any networking & storage access will be disabled. Perhaps the isolated process even runs with a different user id (if so, there will be problems with accessing files due to ownership issues).

@bndeff I joined the Q discussion here so I can keep using my development setup on higher Android versions, and I saved the example with the full environment. You wrote above:

  • We still need some extra code – perhaps based on proot – to set up the environment in this sandbox and communicate with it

Could you describe your idea there in practical steps?
Leonid is skeptical as to whether this solution has any advantages. I lack experience with all the suggestions and would test them as far as possible.

Maybe; I didn't check its code.

However, the ps utility shows that sandboxed processes have random user ids:

u:r:isolated_app:s0:c512,c768  u0_i127       3735  5512 1757388 147852 SyS_epoll_wait      0 S com.android.chrome:sandboxed
u:r:isolated_app:s0:c512,c768  u0_i128       3764  5512 1836464 181892 SyS_epoll_wait      0 S com.android.chrome:sandboxed
u:r:untrusted_app:s0:c512,c768 u0_a91       29580  4727 1948400 199000 SyS_epoll_wait      0 S com.android.chrome
u:r:isolated_app:s0:c512,c768  u0_i124      29793  5512 1800012  48016 SyS_epoll_wait      0 S com.android.chrome:sandboxed
u:r:untrusted_app:s0:c512,c768 u0_a91       29840  4727 1870776 106160 SyS_epoll_wait      0 S com.android.chrome:privileged_process0
u:r:isolated_app:s0:c512,c768  u0_i125      29913  5512 1734300  38952 SyS_epoll_wait      0 S com.android.chrome:sandboxed

And there really may be problems with file system access and networking:

herolte:/ $ cat /proc/29913/status
Name:	dboxed_process1
State:	S (sleeping)
Tgid:	29913
Ngid:	0
Pid:	29913
PPid:	5512
TracerPid:	0
Uid:	99125	99125	99125	99125
Gid:	99125	99125	99125	99125
FDSize:	128
Groups:	

- supplementary groups are empty.

You can't have internet access, etc. directly from there. It has to be proxied to a broker process outside of it.

I've done a few tests with isolatedProcess:

  • socket(AF_INET) cannot be used there (socket() fails with EACCES)
  • socket(AF_INET) file descriptor cannot be passed there from app through Binder
    • I haven't tested unix sockets, but I suspect they'll behave the same
  • isolatedProcess cannot map files from /data/data as PROT_READ|PROT_EXEC
    • it can map them as PROT_READ
    • they currently can be mapped inside untrusted_app (that is, where Termux currently runs)
      • This will probably go away in the future; the current implementation bans execve() from data but still allows dlopen()ing (System.load in Java) downloaded files
      • This is why using proot with targetSdkVersion 29 still works while direct execve() doesn't
      • I'd prefer building thin termux-exec based wrapper with just ELF loader instead of using full proot all the time
        • Maybe this is just me having too low an opinion of proot's stability
        • I'm pretty sure proot nests badly
        • Doing so is technically still against Google Play policy (that is, proot and a custom ELF loader running in untrusted_app are unsandboxed downloaded code)
  • What Chromium does is mapping anonymous executable memory
    • That currently works both in app and isolated process
    • Presumably in the future it will be possible only from an isolated process

instead of full-fledged proot would be nice

I'm still thinking about using full-fledged proot instead of https://github.com/termux/termux-packages/wiki/Termux-and-Android-10#solution.

I already experimented with bundling stuff in the app, and the biggest problem here is environment prefixing, as executables in the app lib dir have a random path. A lot of packages use calls like execve("@TERMUX_PREFIX@/bin/sh", ....) which will be broken. So now we have to either review all packages and patch them (that would be the proper solution) to use a dynamic prefix (e.g. by looking at the $PREFIX variable - I have sample patches for that), or just use proot, which may not be a reliable solution for post-Q Android versions but simplifies the work a lot.


EDIT:
termux-exec should be able to deal with prefixing /bin, but we will have to enforce its use even for devices where it causes issues.

The problem with *.so files mixed with package data (e.g. in Python and Ruby) is not solvable with termux-exec.
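
For readers unfamiliar with it: termux-exec works as an LD_PRELOAD library that interposes execve() and rewrites paths before chaining to the real implementation. A rough sketch of that interposition pattern (the prefix constant and rewrite rules here are illustrative, not termux-exec's actual logic, which also handles shebangs):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

/* Illustrative prefix; the real value depends on the app's data dir. */
#define PREFIX "/data/data/com.termux/files/usr"

/* Interposed execve(): rewrite absolute /bin and /usr/bin paths into
 * the prefix, then chain to the real execve() found via RTLD_NEXT. */
int execve(const char *path, char *const argv[], char *const envp[]) {
    static int (*real_execve)(const char *, char *const[], char *const[]);
    if (real_execve == NULL)
        real_execve = (int (*)(const char *, char *const[], char *const[]))
            dlsym(RTLD_NEXT, "execve");

    char rewritten[PATH_MAX];
    if (strncmp(path, "/bin/", 5) == 0 || strncmp(path, "/usr/bin/", 9) == 0) {
        snprintf(rewritten, sizeof(rewritten), PREFIX "/bin%s",
                 strrchr(path, '/'));
        path = rewritten;
    }
    return real_execve(path, argv, envp);
}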

@xeffyr have you taken a look at my example - there are three questions to "check now"?

Is it possible to have an option to not waste performance on proot and just run Termux with root privileges if my device has them?

Possible; calling su or tsu normally will escape proot. However, you won't be able to install anything, because apt now has a lock against running under root (unless you su back to the Termux app uid).

This link doesn't seem to work well.
I ran into an issue when putting a binary into the shared native library path, after packing the executable binary into the APK.

AndroidManifest.xml

android:extractNativeLibs="true"

Gradle

    splits {
        abi {
            enable true
            reset()
            include 'x86', 'x86_64', 'arm64-v8a', 'armeabi-v7a'
            universalApk true
        }
    }

/src/resources/lib/arm64-v8a
/src/resources/lib/armeabi-v7a
/src/resources/lib/x86
/src/resources/lib/x86_64

The executable binary folder, including some `lib` and `config` folders/files:
/perl/bin/perl

An exception is thrown after generating the APK and installing it on devices:

Installation did not succeed.
The application could not be installed: INSTALL_FAILED_NO_MATCHING_ABIS
Installation failed due to: 'null'
Retry

Might this issue be related to the Termux app? Any help?

I'd prefer building thin termux-exec based wrapper with just ELF loader instead of using full proot all the time
Can you show the part of the proot code that implements ELF loading?

About proot, libc and dlopen: is it possible for proot to intercept mmap(PROT_EXEC) and replace it with mmap(PROT_READ); mmap(PROT_WRITE|PROT_EXEC, MAP_ANONYMOUS); memcpy(); munmap()? This would let us use PROT_EXEC, but would increase memory usage.
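
For reference, the substitution described there would look roughly like this (a minimal sketch of the replacement sequence, not proot's actual interception code; note the anonymous writable+executable mapping itself still relies on execmem being allowed):

#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>

/* Replace a file-backed PROT_EXEC mapping with an anonymous executable
 * copy: map the file read-only, copy it into anonymous rwx memory, then
 * drop the file mapping. The cost is a private copy that the kernel
 * cannot page back to the file - the memory-usage downside noted above. */
void *map_exec_copy(int fd, off_t offset, size_t len) {
    void *file_map = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, offset);
    if (file_map == MAP_FAILED)
        return MAP_FAILED;

    void *exec_map = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (exec_map != MAP_FAILED)
        memcpy(exec_map, file_map, len);

    munmap(file_map, len);
    return exec_map;
}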


Sorry for interrupting - looks like you're working hard to find an answer for this terrible situation and you finally got somewhere, but...

Think broader, guys.

It's not about Android at all. Google is trying to pretend it's about security, but (as @esminis pointed out) most security concerns come from Java/Kotlin, and if something will be problematic in the future, it's not problematic yet.
When I first discovered it, I legitimately wanted to write a petition to Google, as Android will always remain Linux, and blocking core Linux functionality is straight-up ridiculous. But the thing is,

Android may not remain Linux anymore.

Moments before writing that petition, I thought about that video: Google Fuchsia OS could change the world...but.... Two key things:

It is based on a microkernel called Zircon, which compared to the Linux based Android...

Do you get it now? Android is Linux-based and Fuchsia is Zircon-based. Google wants to kill the concept of a Linux-based system. Yes, sure, they don't want to kill their Linux-based SYSTEMS themselves ("FuchsiaOS will most likely co-exist with ChromeOS and Android"), but you should break free of the thinking that Google always builds their systems on Linux.

...And if it's not Linux, some core functionality is going to be missing - e.g. cd, dir, pwd, and all the other shell commands (provided Termux doesn't ship with them, which I'm pretty sure it does not. Correct me if I'm wrong.).

Google wants to DITCH Linux. The reason why they allowed Linux apps on ChromeOS? Because they wanted SOME apps available (we all know Chrome apps suck and Android apps aren't THAT good for a desktop). With FuchsiaOS, they most likely won't do that, as they ensured they'll get devs onboard by providing an API wayyyyyy before an actual open beta. But why would they? Isn't it easier to build JUST YET ANOTHER LINUX (???distribution???) and call it a day? NO!!! They want to get rid of the terminal environment, which is holding them back. Apple did that with iOS. Google can, too! (That's the reason why they don't want terminal emulators.)

...has massive scalability.

We live in interesting times when you can't just compile for x86, test on any Intel CPU, and call it a day. It's not the 90s. Yes, sure, you can compile for x86 in the case of desktop apps and ARM in the case of mobile apps, but IT'S NOT ENOUGH. New options are popping up like freaking popcorn EVERYWHERE! All of a sudden, AMD dug itself out of its grave and caused a massive disturbance in the desktop market. RISC-V, in which Google has shown interest, released its first OS-capable chipset under the SiFive brand. WebAssembly might be what Java always wanted to be - an app compiled once for any platform and any architecture, so it will run on anything with a web browser installed.

This is a HUGE roadblock for a fully multiplatform, scalable system. Thus, they want to get rid of architectural differences. The reason why they don't want the exec() function: it may create some. If exec()-ed files aren't compiled for one platform or another, an app is dead. But if CPP files are stored in the app directory and compiled alongside it, they have full control over the platforms for which your app is compiled. And - because of how crazily multi-platform Fuchsia is - they are going to force devs to compile for every platform in existence.

Final words:

I'm 10000000% sure you've all heard about Fuchsia. It's a big deal for the Android developer community, so it's hard to miss, especially for folks like you. But so far, it looks like no one has realized how it's connected with this entire SELinux-is-dead situation. And again, sorry for interrupting and getting so off-topic.

@GuzioMG Google has a long history of creating hype and destroying things. Take Stadia, for example. Anyway, not working on something we have while speculating about a thing we don't even have a clue about isn't a great idea. The developers and the community are trying their level best to make Android great, and abandoning it for the sake of an OS that we might never get is impractical and off-topic for this issue.

FreeBSD and macOS do not use Linux or GNU, yet they still have the POSIX shell (cd, pwd) and POSIX utilities (ls, rm). Before Windows 10 added WSL, Cygwin and MSYS provided POSIX-inspired operating environments based on GNU that ran on Windows. Even MS-DOS had a port of much of GNU through the DJGPP project. But what all these have in common is that users can write a file to a directory and then make the file executable.

The problem is ultimately whether the OS publisher wants to block owners of a consumer electronics device from building and running software on a device as a safety measure against polymorphic or metamorphic malware. Whether a POSIX-inspired environment can be made to run efficiently on Android 10, Fuchsia, or any other mobile platform depends mostly on how strong its policy against metamorphic code is.

from building and running software on a device as a safety measure against polymorphic or metamorphic malware

The goal of these changes is to work towards enforcing w^x across the board to improve app security. You're misunderstanding the reasoning behind it and what it provides. It's not there to crack down on what malicious apps can do but rather to protect apps from exploitation and to make post-exploitation more difficult. Disallowing data from being executed is part of the basics of security. They really should have done this from the start and then come up with a way for apps like Termux to obtain permission to deviate from it, but Android started out with a wild west, anything goes approach and has had to make painful changes over time instead.

The end goal would be to forbid generating and running native code in memory (execmem), via tmpfs / ashmem or via storage (app_data_file execute). It remains to be seen how quickly they'll be able to work towards that, especially since ART has a JIT compiler.

There's not going to be anything blocking building and running code on a device but you need to be aware that the vehicle for that is an apk, i.e. bundle the libraries / executables into an apk and run them from there. You can use shared uid apks where there's a central one with the base code and then others providing more libraries / executables. These changes aren't going to prevent this, only doing it without going through an apk, and it's still quite a while before they'll come close to completing this, especially with blockers like the ART JIT.

The goal of these changes is to work towards enforcing w^x across the board to improve app security.

Absolutely not - if you see taking away features as improving security, then you should be using an old button phone; it is surely much more secure.

The biggest reason for this change, as I see it, is for Android apps to be more platform agnostic and to allow Google an easier migration to Fuchsia (this applies to other recent changes from Google, like SAF).

The issue is that all you see is "security" and you don't work on developing something useful, only on stripping features. Proper security should not affect features at all.

Of course it would be more secure if you strip features, but this way you will end up with an app that does absolutely nothing. It does not matter how secure you make your OS, app, ... when your hardware has exploits or back-doors, or your user is uneducated and can be exploited using social engineering.

I think this whole discussion is off-topic, so please don't start it again. This is the place to discuss how the issue should be solved in Termux.

taking away features as improving security, then you should be using an old button phone

The feature isn't taken away. It is just enforced to be in the right place, i.e. binaries must be supplied within the APK file and not sideloaded from the Internet.

The biggest reason for this change, as I see it, is for Android apps to be more platform agnostic

In that case the whole Linux API should be blacklisted and disallowed for use in any way. But that didn't happen.

The issue is that all you see is "security" and you don't work on developing something useful, only on stripping features.

No, it isn't. The execve() syscall is not part of the Android SDK and is not needed for typical Android apps at all (native code is expected to be used through JNI).

Just assume that it is available only for certain OS functionality: init system, adb, etc...

It does not matter how secure you make your OS, app

your user is uneducated and can be exploited using social engineering

It does matter when it comes to applications working with "untrusted data" such as web pages, messages and media files.

An RCE makes it possible to download a malicious payload and execute it with the privileges of the current application. When data execution is disabled, the RCE is still not closed, but payload execution is either prevented or made more difficult.

how the issue should be solved in Termux

--> #1427

binaries must be supplied within the APK file and not sideloaded from the Internet.

s/sideloaded from the Internet/built by user from the source he can access and modify/
FTFY.

No, it isn't. The execve() syscall is not part of the Android SDK and is not needed for typical Android apps at all (native code is expected to be used through JNI).

It's not execve() itself, but popen() and system(); both are available in the NDK and aren't even officially advised against (they are also part of the publicly available NDK, not the private (system) one).
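
To illustrate: both of those calls fork() and execve() the system shell (/system/bin/sh) under the hood, so they depend on the app being allowed to exec the system shell - access that, as noted elsewhere in this thread, is not part of the public API surface and can be restricted. A small sketch:

#include <stdio.h>
#include <stdlib.h>

/* Both calls spawn the system shell internally, so despite being
 * public NDK APIs they are subject to the same exec restrictions
 * being discussed in this thread. */
void shell_examples(void) {
    int status = system("ls /");        /* runs via the system shell */
    printf("system() returned %d\n", status);

    FILE *p = popen("getprop ro.build.version.release", "r");
    if (p != NULL) {
        char line[128];
        while (fgets(line, sizeof(line), p) != NULL)
            fputs(line, stdout);
        pclose(p);
    }
}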

I'm glad to see that this issue is getting resolved (it seems like users won't be able to build code at some point, though; ugly build-APK-run hacks don't count), but this OS change sucks for sure; I don't think this should even be discussed. W^X is about RAM, not about persistent memory.

As sharedUserId was mentioned in this thread, I should mention that it is also deprecated.
Yay, security, not yet another feature removal for sure.

Deprecation of the shared user id is perhaps another problem to consider. If it is really going to be removed, then our in-APK packaging solution is probably not going to be long-term 🤣.

taking away features as improving security, then you should be using an old button phone

The feature isn't taken away. It is just enforced to be in the right place, i.e. binaries must be supplied within the APK file and not sideloaded from the Internet.

That is a feature: the ability to execute code that was not part of the APK or an expansion file.

The issue is clearly visible in apps that have lots of downloadable code of which the user only uses a very small part. Of course you can pack everything into a single APK, but this might make the APK size multiple gigabytes; or into multiple APKs, and then you will have release hell with hundreds of APKs.

The biggest reason for this change, as I see it, is for Android apps to be more platform agnostic

In that case the whole Linux API should be blacklisted and disallowed for use in any way. But that didn't happen.

It has been greylisted since Android 9 - so yes, they are gradually doing it.

The issue is that all you see is "security" and you don't work on developing something useful, only on stripping features.

No, it isn't. The execve() syscall is not part of the Android SDK and is not needed for typical Android apps at all (native code is expected to be used through JNI).

But Runtime.getRuntime().exec() is part of the Android SDK and always was (though not for long, as it will also be disabled).

Just assume that it is available only for certain OS functionality: init system, adb, etc...

It does not matter how secure you make your OS, app

your user is uneducated and can be exploited using social engineering

It does matter when it comes to applications working with "untrusted data" such as web pages, messages and media files.
An RCE makes it possible to download a malicious payload and execute it with the privileges of the current application. When data execution is disabled, the RCE is still not closed, but payload execution is either prevented or made more difficult.

So maybe the Java reflection API should be disabled, as it can also be used as the same RCE attack vector.
Or just disable Java apps in general, as it is easy to slip something into any app's code that will never be noticed by Google's bots, but all sensitive data will be compromised.

It is easy to disable all the features that made Android better than iOS - file system access; code download, compilation and/or execution on device; call recording; ... . The more features get disabled, the more users will run away to iOS, macOS, ... .

This change and future changes in this area have nothing to do with portability beyond Linux or Fuchsia. I was part of the early discussions and followed the development of the SELinux policy and the code reviews for it. This was implemented by their security team as a security feature after numerous failed attempts at landing it and a failed attempt at making a v2 sandbox.

Absolutely not - if you see taking away features as improving security, then you should be using an old button phone; it is surely much more secure.

It's forcing a different approach. Claiming execve is being removed is a misunderstanding of the notices about it. It's becoming more restricted and you can't assume that you can run anything but an executable appropriately bundled with your apk. It should not be assumed that any of the system executables can be run, and it won't be possible to run executables from writable app data. The system executables like those that are part of the system's toybox are not part of the public API and access to them can be removed at any point. It isn't yet enforced for native libraries, but it should be expected that the same restriction will be applied. It wasn't all done at the same time, to make this less disruptive.

I developed a full set of changes to disallow every way of performing dynamic native code execution as part of my security work on GrapheneOS (including under previous names of the project), and I landed assorted bits and pieces of this upstream. The proposals to disallow execmem, app_data_file execute (libraries / mapping code executable) and app_data_file execute_no_trans (executables) contained a way for apps to opt-out of the security feature if they had an incompatibility with it. However, many apps would have missed out on the security improvements from providing w^x and wouldn't have made a difficult migration to other mechanisms. I wouldn't have taken the approach that they did rather than providing a way for apps to exclude themselves from this hardening, since the feature is designed to protect apps, rather than securing the OS from apps.

A major part of them not offering the proposed exception is that their policies for the Google Play Store already forbid this and they consider it malicious behavior to be breaking the rule. That's why they didn't bother with the exception system that was included in every downstream proposal for the features like this. Maybe they'll reconsider once they move on to forbidding mapping app_data_file as executable, etc. since it becomes more disruptive especially since that's currently a workaround for not having the ability to run executables directly (i.e. you can still run them indirectly). Removing or restricting execmem is a much harder problem and difficult for their security team to push for due to the ART JIT.

The biggest reason for this change, as I see it, is for Android apps to be more platform agnostic and to allow Google an easier migration to Fuchsia (this applies to other recent changes from Google, like SAF).

They can still bundle native libraries and executables in the apk via the supported approach and run those, so it doesn't provide any kind of portability. It doesn't make sense and even if it did, it's simply a fact that it wasn't part of the reasoning behind it.

The issue is that all you see is "security" and you don't work on developing something useful, only on stripping features. Proper security should not affect features at all.

I develop lots of useful software and these personal attacks are completely uncalled for and incredibly rude. A huge part of security is attack surface reduction. Improving security for apps also certainly requires changing the approaches that they're using to be more secure. In this case, that means switching to using the system package manager via apks.

Of course it would be more secure if you strip features, but this way you will end up with an app that does absolutely nothing. It does not matter how secure you make your OS, app, ... when your hardware has exploits or back-doors, or your user is uneducated and can be exploited using social engineering.

You may be a security nihilist, but a lot of people are working on making security better, which is a long and difficult process involving lots of compromises and sacrifices. It's always a balance, and not everyone is going to agree with the balance that is reached between security and performance, security and memory usage, security and features, etc. Some people will want it taken further and others will want it relaxed. The purpose of these kinds of changes is hardening apps against post-exploitation and eliminating classes of vulnerabilities. It wasn't done to improve the security of the OS to protect it from apps. It's to protect apps themselves from attackers. They decided to force app developers to comply with it to have it make a major difference rather than providing an opt-out, where apps that could have easily moved to other mechanisms wouldn't have bothered. Unfortunately, Termux is in a situation where migrating to something else is a much more difficult problem, but that's not the case for the vast majority of apps. Very few apps will struggle to cope with this. Termux happens to be special and would have been a clear use case for opting out, which wasn't implemented.

I think this whole discussion is off-topic, so please don't start it again. This is the place to discuss how the issue should be solved in Termux.

You can't just try to get the last word on an issue by making baseless claims, throwing around personal attacks and then telling other people not to talk on it because you consider it off-topic. You've continued to talk about it yourself.

@n0n3m4

It's not execve() itself, but popen() and system(); both are available in the NDK and aren't even officially advised against (they are also part of the publicly available NDK, not the private (system) one).

The execve system call is certainly a public API and there is no indication or announcement that it is being removed. What is being removed is the ability to execute files that are potentially writable (i.e. app_data_file) or system executables that are not supposed to be part of a public or stable API. It's going to remain possible to bundle an executable with an apk and execute it.
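
To spell out the supported path: executables shipped in the apk are extracted to the read-only native library directory (with extractNativeLibs enabled) and can be exec'd from there. A minimal sketch, where libhello_exec.so is a hypothetical executable packaged under jniLibs/ and the directory comes from ApplicationInfo.nativeLibraryDir on the Java side:

#include <stdio.h>
#include <unistd.h>

/* Exec a binary bundled with the apk. Packaged executables must be
 * named lib*.so so the installer extracts them; the directory itself
 * is not writable by the app, which is exactly what the policy wants. */
int run_bundled(const char *native_lib_dir) {
    char path[512];
    snprintf(path, sizeof(path), "%s/libhello_exec.so", native_lib_dir);
    char *const argv[] = { path, NULL };
    execv(path, argv);      /* only returns on failure */
    perror("execv");
    return -1;
}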

W^X is about RAM, not about persistent memory.

Mapping the same portion of a file as both executable and writable results in a w^x violation inside the address space of a process. Even if you believe that w^x only applies to memory, fully enforcing it for memory requires that files can only be mapped as either writable or executable, never both. That means that in order to fully enforce w^x, the OS needs to enforce that files which can be mapped as executable must not be writable by the process. Executing an executable results in portions of it being mapped executable too. It doesn't make much sense to claim it only applies to memory, because there is no way of directly running native code without it being in memory, and being able to write to a file allows it to be mapped as writable in memory.
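
To make the file-mapping point concrete: two MAP_SHARED mappings of one writable file give the process a writable view and an executable view of the same pages, which is exactly the in-memory w^x violation described above. A minimal sketch (error handling omitted):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map one writable file twice: bytes stored through the writable view
 * are immediately visible through the executable view, so a file (or
 * memory) write primitive can rewrite live code. This is the kind of
 * situation the SELinux changes are meant to rule out for app_data_file. */
void wx_violation_demo(const char *path, size_t len) {
    int fd = open(path, O_RDWR);
    unsigned char *rw = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    unsigned char *rx = mmap(NULL, len, PROT_READ | PROT_EXEC,
                             MAP_SHARED, fd, 0);
    rw[0] = 0xc3;   /* patch a byte; rx[0] now reads 0xc3 as well */
    munmap(rw, len);
    munmap(rx, len);
    close(fd);
}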

As sharedUserId was mentioned in this thread, I should mention that it is also deprecated.
Yay, security, not yet another feature removal for sure.

There is no mention of security in the description for why that API is deprecated. This is what it actually says:

Shared user IDs cause non-deterministic behavior within the package manager. As such, its use is strongly discouraged and may be removed in a future version of Android. Instead, apps should use proper communication mechanisms, such as services and content providers, to facilitate interoperability between shared components.

They state that it may be removed, and it's quite possible there will be more alternatives to it by that point if it ever happens. It sounds like they want package management to be reproducible and something about the way this is designed or implemented is counter to that.

@esminis

So maybe the Java reflection API should be disabled, as it can also be used as the same RCE attack vector.

Disabling dynamic Java code execution by default is definitely a planned feature (not reflection as a whole): GrapheneOS/os-issue-tracker#33. It involves restricting class loading, etc. to non-writable labels, just like how the changes work for native code execution. Previously, for the past incarnation of the project: GrapheneOS-Archive/legacy_bugtracker#374. It was planned before 2016 though. I don't have a full history of the discussions, etc. around it. Our approach is different though, and there will be a way to make exceptions, just as we still have planned for native code execution. The SELinux policy change talked about in this thread is only one small part of a bigger picture and the change implemented upstream is far from complete even just for native code. They made assorted other changes tied to this work too, some of which I landed upstream, but those didn't cause any disruptions for you. I never proposed making any disruptive changes without having an opt-out from it. The hidepid=2 changes were similar: I proposed landing it with a permission for apps to gain an exception, but they decided to just remove it and have apps use the OS APIs for retrieving app / process statistics, etc. with the existing usage stats special permission.

The way we approached these features always involved providing a way for applications like Termux to opt-out of the hardening feature. The changes are designed to protect apps from exploitation, not protecting the OS from apps. Google initially deployed this change with the targetSandboxVersion="2" attribute along with further hardening such as removing app_data_file execute (mapping as executable) too, and the removal of access to a lot of base system executables like the shell. It had incredibly poor adoption and they ended up giving up on that opt-in path and moved to taking the approach of making the changes more forcefully bit-by-bit as part of newer API levels. As I said, it's not the approach that I would have taken, but I can understand why they feel that they have to force this on developers to stop them from just opting out and leaving it that way indefinitely instead of adapting to it.

You can look at what was changed for targetSandboxVersion="2" to see other things that are planned. The SELinux policy for it was dropped in the latest OS versions but you can look back at when it was actually fully implemented, before they chose a different approach than opting in to this. I think the way it should have worked is fully implementing all of it ASAP and providing an opt-out rather than slowly making these changes bit-by-bit with no proper plan for fully deploying it and lots of disruption along the way. Not up to me though, I don't work for them or have much say in anything beyond making suggestions.

the OS needs to enforce that files which can be mapped as executable must not be writable by the process

Nope, it doesn't, but I have to admit that this argument is good enough to be accepted by some average programmer. I'm an information security professional myself (not on the defense side though) and I know that W^X is primarily designed to handle various cases of memory access related vulnerabilities: stack / heap overflows and other stuff like this. Writing to a file requires the ability to ROP or run shellcode (unless we're talking about pure file-related logical vulnerabilities, which are much easier to detect), basically RCE. If somebody has RCE on a device, the damage is done even if it is "protected" by disallowing /system/bin/... binaries to run, for example (hackers are scared for sure).
This "executables must come from a signed source blah blah blah security and stuff" is nothing but trusted computing, not W^X in its original meaning.

There is no mention of security in the description for why that API is deprecated

Well, it was sarcasm. I said "Yay, security" because every single feature Google has broken recently is broken to make Android "more secure", whether it is POSIX breakage (Scoped Storage), yet another permission removal (the Call / SMS permissions fiasco), the inability of the user himself to choose some files / folders even via SAF (an awesome Android 11 feature), or the problem we are currently discussing.

Disabling dynamic Java code execution by default is definitely a planned feature (not reflection as a whole)

I was talking about reflection, not dynamic code execution.

For example, let's take the file https://abc.com/plain.txt:

classA methodA parameter0
classB methodB parameter1

Then we load it from the web as a simple plain-text file. Then we parse it in Java, and then we simply call the wanted classes, methods, ...:

listOf("classA methodA parameter0", "classB methodB parameter1")
  .map { it.split(" ") }
  .forEach {
    // assumes a static method taking a single String argument
    Class.forName(it[0]).getDeclaredMethod(it[1], String::class.java).invoke(null, it[2])
  }

That is remote code execution, and none of those measures will help unless reflection is disabled.

Nope, it doesn't, but I have to admit that this argument is good enough to be accepted by some average programmer.

Mapping a file as both writable and executable is a violation of in-memory w^x. It means there's a writable and executable mapping for the same data, and writing to the writable mapping will change the executable code. Enforcing w^x via SELinux or PaX is incomplete without addressing more than anonymous mappings. You're implying that I'm being misleading or dishonest when that's exactly what you're doing by totally ignoring what I said, not countering it, and yet claiming that you've done that. What is your response to what I've said about this? You just disregarded it and went on to make inaccurate claims without actually addressing what I said.

It's also certainly relevant for persistent storage. Just as an attacker can use an arbitrary memory write vulnerability to overwrite code in memory without w^x, they can do the same with an arbitrary storage write vulnerability to overwrite code on storage. Forbidding executing writable data on persistent storage (i.e. enforcing w^x for persistent storage) is a very standard security policy. Any decent system administrator knows about this and it has been the industry standard for years. It's one of the basic steps to securing good old fashioned LAMP servers: making sure that the web server runs as a user which cannot modify CGI files. It's broadly applicable and is a broadly deployed security mechanism to mitigate file write vulnerabilities. Similarly to the above, what's your response to this? Do you really think that w^x policy doesn't also apply to the executable and writable permissions on persistent storage? It certainly mitigates vulnerabilities and is a widely adopted / recognized approach to security. It's also the same as a policy of forbidding w|x, and is still accurately referred to as w^x, especially since it is part of what's required to enforce it in memory.

This "executables must come from a signed source blah blah blah security and stuff" is nothing but trusted computing, not W^X in its original meaning.

It's also an important part of mitigating attacker persistence, so it's part of the concept of verified boot and code signing, but that's totally separate from the w^x concept.

I'm an information security professional myself (not on the defense side though)

So maybe you shouldn't be so callously spreading misinformation and making a response to my comment where you disregard what I say, misrepresent it and pretend to counter it while you actually don't even bother to quote more than a trivial part of my response and treat it as if I never said the rest of it. What is that if not being deliberately misleading?

and I know that W^X is primarily designed to handle various cases of memory access related vulnerabilities: stack / heap overflows and other stuff like this

That's one aspect of it, and enforcing it for non-anonymous mappings is clearly part of that, as I explained before you wrote this comment. It's not the only aspect, though. Enforcing w^x policy for storage, such as the example of running Apache as a user unable to write to the CGI files, is a very traditional concept too.

Writing to file requires an ability to ROP or run shellcode (unless we talk about pure file-related logical vulnerabilities, that are much easier to detect), basically RCE.

No, it doesn't require that. As I mentioned above, files get mapped in memory, so a memory write vulnerability can directly write to a file. File write vulnerabilities also exist, where the attacker is able to replace certain (or any) files that the application has write access to. For example, an application may unsafely unzip a file, or may not correctly sanitize / verify a path provided by an attacker, which can let the attacker direct the application to write to files that it should not be overwriting with untrusted data. For example, they can overwrite a library or executable that has been stored in app data and is executed / mapped as executable from there. It's very easy to see that what you're saying is inaccurate, and I don't understand why you're saying it in response to a comment that already laid this all out.
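
As a sketch of the unzip case (the function name here is hypothetical, not from any real archive library): an entry name like ../../lib/payload.so climbs out of the extraction root and overwrites code the app later executes, unless it's rejected before the write:

#include <string.h>

/* Illustrative check: returns 0 (unsafe) if an archive entry name is
 * absolute or contains a ".." component that could escape the extraction
 * root, else 1 (safe to join with the root directory). */
int entry_name_is_safe(const char *entry) {
    if (entry[0] == '/')
        return 0;
    for (const char *p = entry; *p; ) {
        const char *sep = strchr(p, '/');
        size_t len = sep ? (size_t)(sep - p) : strlen(p);
        if (len == 2 && p[0] == '.' && p[1] == '.')
            return 0;
        p = sep ? sep + 1 : p + len;
    }
    return 1;
}

Without a check like this, the attacker doesn't need any memory corruption at all: the file write alone replaces code that will later be executed from app data.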

the damage is done even if it is "protected" by disallowing /system/bin/... binaries to run, for example (hackers are scared for sure).
This "executables must come from a signed source blah blah blah security and stuff" is nothing but trusted computing, not W^X in its original meaning.

That's not at all what I've been talking about, but rather protecting applications from vulnerabilities where an attacker is able to overwrite executable code using a write vulnerability, whether it's a memory write vulnerability (which CAN directly or indirectly write to files WITHOUT obtaining control over execution with ROP, etc.) or a file write vulnerability.

whether it is POSIX breakage (Scoped Storage)

I don't know what disallowing broad access to shared storage has to do with POSIX breakage. It's worth noting that POSIX doesn't support file access revocation, and Linux doesn't provide it as an extension yet. It's not possible to provide a fine-grained access control system with revocation on top of POSIX file access without abstracting it: every time any fine-grained permission was revoked, any app with the potential to have access would need to be completely killed and restarted. There's still POSIX file access... just not to files shared with other applications, since users and other apps are supposed to have proper control over those.

Android has long built its security model on SELinux and higher-level access control checks. In the early days it relied on a very limited approach based on POSIX file permissions and users / groups, and it wasn't possible to provide a decent sandbox or protect against mistakes like an app using overly broad permissions and getting compromised by another.

inability of user himself to choose some files / folders even via SAF (awesome Android 11 feature)

https://developer.android.com/preview/privacy/storage#all-files-access

Do you really think that w^x policy doesn't also apply to the executable and writable permissions on persistent storage?

Maybe it's called something different, I don't really care. For now, I'd just link to the Wikipedia article, which doesn't mention the word "file" anywhere on the page. To avoid misunderstanding: I consider non-anonymous memory mappings to be subject to W^X as well, but not the file itself.

As I mentioned above, files get mapped in memory, so a memory write vulnerability can directly write to a file

A file being mapped as RWX in memory doesn't imply that writing to this memory will change the file contents (if the file is opened read-only, the mapping must be private and copy-on-write will happen instead).
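
A minimal sketch of that copy-on-write behavior (assuming Linux; the file name is illustrative, the file is assumed to exist, and error handling is omitted):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("example.bin", O_RDONLY);   /* file opened read-only */
    /* A writable *private* mapping is still allowed: writes land on a
     * copy-on-write page, never in the file. A MAP_SHARED request with
     * PROT_WRITE on this read-only fd would fail with EACCES instead. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    p[0] = 0x42;   /* modifies the private copy only; the file is unchanged */
    munmap(p, 4096);
    close(fd);
    return 0;
}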

So maybe you shouldn't be so callously spreading misinformation, or making a response to my comment in which you disregard what I say, misrepresent it, and pretend to counter it while not bothering to quote more than a trivial part of my response, treating the rest as if I never said it. What is that if not being deliberately misleading?

I'm not so good at reading long posts. The initial discussion was about file writes and W^X, and Wikipedia is pretty clear on this.

https://developer.android.com/preview/privacy/storage#all-files-access

Note: Apps that are granted this permission still cannot access the app-specific directories that belong to other apps. These directories appear as subdirectories of Android/data/ on a storage volume.

Oops

This discussion is starting to become off-topic (especially given the size of some messages), so I'm leaving for now.

@esminis

How do you know that, as this was most likely not a low-level developer or security team decision? I doubt that any of the developers will be told the ultimate goal of all those changes.

It was a security team decision which they pushed for and which was not overruled. Many of the proposed / planned changes have definitely been overruled due to what other teams wanted, and a lot of these things have been delayed for years or crippled, such as the change being discussed here, which is an incomplete deployment of the overall feature. The overall feature can't really be deployed anymore due to the introduction of the ART JIT. The security team doesn't get their way on everything and has to fight for the changes they want to make.

I'm in regular contact with them and I actively followed this feature so I'm well aware of why it was made, the downstream implementations and external proposals to AOSP which kept it on their radar, etc.

Am I understanding right: you were part of the team that decided to implement this, so essentially you are defending your own "good" decision.

No, I implemented it downstream for GrapheneOS (under previous names of the project) years ago and it had an exception system. Various portions of the feature were landed upstream but our proposal to restrict execmem and app_data_file execute/execute_no_trans had an exception system for apps to opt-out, as I mentioned, and so did other external proposals.

we need to implement a proper permission to let the user choose for himself whether he wants to download and execute external code; hm, but it will take so much work testing all the UI, Java, C, ... parts; nah, just do a simple 10-line change and it will surely be more secure.

As I've said, these changes are primarily about protecting apps from exploitation (mitigating vulnerabilities like a file write vulnerability by stopping them from being able to gain code execution) and post-exploitation (blocking persistence by an attacker that has successfully exploited the app and is able to write to storage). It's not about protecting the OS from applications. It's also only a small part of the overall feature for native code. Many parts of this have already landed in the past and just weren't disruptive to Termux like this part was. The feature isn't fully landed yet, and there are related features beyond native code to pursue once dynamic native code is prevented, though it's unclear how they can actually achieve that for the time being due to the ART JIT. They've still been working towards it, and also restricting JIT usage within the base OS, such as in system_server, to remove execmem where they can already.

If you think this was a personal attack, then it would be nice to see that useful software to prove your point, that is not a modification of another open-source project's code or a 10-line app (joking ;) I've read quite some history, reddit, ... so I already know the answer). From what I understand you only do security research and are out of touch with real users.

I do a lot more than security research and engineering. You're once again just engaging in personal attacks, which you started without any provocation. If you're too triggered by having a technical discussion to avoid making misrepresentations and dishonest claims, and to avoid attacking me as a person or attacking my experience, there is clearly a lack of merit behind your position.

You may be a feature nihilist, but a lot of people are working on making apps that actually do something.

As I've said repeatedly, my approach to these features always had an opt-out for apps where it would be very difficult to adapt to them. The vast majority of apps using writable / dynamic executable code are doing it completely unnecessarily and can easily migrate away from it. It's rare for it to be core to the function of the app as it is for Termux, and in cases where it is, they're usually not taking the same care to perform signature verification, etc. Termux itself verifies package signatures but has no defense against attacker persistence. I can understand why Google took a very pragmatic approach: they assumed that app developers would just opt out rather than migrating away from what they were doing where possible, or improving security with signature checks (including after the initial download / installation of the code, to prevent attacker persistence). It's very difficult to try to herd developers into addressing a widespread systemic issue.

There's an endless stream of trivial file write vulnerabilities in apps leading to code execution bugs due to apps choosing to execute code from app data. That's the main thing that this change is attempting to directly address, along with moving towards full native code w^x enforcement, since it takes away one more way of dynamically generating / modifying native code in-memory and via persistent storage. It's NOT a complete implementation of the feature. It's easy to change the SELinux policy to make the feature complete, but that would destroy app compatibility. When we make these changes in GrapheneOS, we provide toggles / exception systems. For example, we do fully forbid dynamic native code generation already, but at the moment it only applies to apps shipped within the OS and isn't applied to third party apps where it will cause compatibility issues. A way for apps to opt-out of it and users to deal with apps that have not adapted to it has to be provided first.

In your case most likely yes, as proper resolution of security issues would require more work and knowledge than most such security researchers have.

Attack surface reduction is not an improper way of addressing security issues. It's one of the best ways to do it. Once again, you're just trying to attack other people's competence and motivations instead of making real arguments.

So you can, yea? That is so so rude :)

It's ridiculous for someone who started making personal attacks and who is acting in a way that's incredibly dishonest to say that others are being rude. Address your own behavior first, and remember that you began making unprovoked personal attacks and misrepresentations because you can't argue based on the facts.

The issue is that you only want to improve security but simply don't care about real users and what they are using their devices for.

You're not even trying to read and respond to what was actually said; instead you're inventing strawman arguments and making personal attacks. I've been writing responses that are technical and based on facts, and your response is just to misrepresent and ignore what I've said, to claim that I've done or said things which I haven't, and to make personal attacks on my experience / motivations, and the same for others.

It really says a lot when that's the approach you choose to take with someone who has spent a bunch of time explaining the technical reasons behind the changes: what exactly was changed, why it was changed, how that fits into the bigger picture and where things are going in the future. You can disagree with the direction and decisions being made without stooping to spreading misinformation, making personal attacks and acting dishonestly and unethically. Believe it or not, it's quite possible for people to hold different opinions on a topic and still have a discussion about it without making personal attacks, misrepresenting the positions of the other side and lying.

That was meant to stop further discussion, which you started again although it was shut down earlier in the previous year.

I didn't start the discussion again, and it's you that started making unprovoked personal attacks.

Maybe just stop spreading misinformation and attacking people if you don't want to be challenged on it.

@n0n3m4

Maybe it's called something different, I don't really care. For now, I'd just link to the Wikipedia article, which doesn't mention the word "file" anywhere on the page. To avoid misunderstanding: I consider non-anonymous memory mappings to be subject to W^X as well, but not the file itself.

Wikipedia is a collaboratively edited encyclopedia, not a good place to obtain particularly detailed or accurate information on security topics.

You can clearly see that even for in-memory w^x, the combination of an executable mapping of a file and a writable mapping of the same content needs to be prevented in order to enforce w^x. Similarly to how rw -> rx transitions are forbidden as part of strictly enforcing in-memory w^x, those file mappings also don't need to exist at the same point in time to be a violation. If an attacker can write to the memory and that memory later gets executed, that's more than enough.
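
On the anonymous-memory side, here's a minimal sketch of the rw -> rx transition in question (assuming Linux and x86-64; the code bytes are illustrative and error handling is omitted):

#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 machine code for: mov eax, 42; ret */
    unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    memcpy(p, code, sizeof(code));
    /* Under PaX MPROTECT or an equivalent strict w^x policy, this call
     * fails: a page that has ever been writable may not become executable,
     * even though the mapping is never writable and executable at once. */
    mprotect(p, 4096, PROT_READ | PROT_EXEC);
    return ((int (*)(void))p)();
}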

The meaning of w^x is that write permission is exclusive from execute permission. This is not something specific to memory. It applies to files as well, and is directly connected to in-memory enforcement of the rule since files get mapped into memory. Strict w^x doesn't just involve avoiding violations in practice but also enforcing that they cannot happen, which prevents an attacker from ever gaining enough control to inject dynamic native code, rather than just raising the bar. It doesn't prevent them from ever gaining remote code execution since they don't need to provide their own native code to do that, but it does prevent exploiting many kinds of vulnerabilities (especially when it's enforced for persistent storage to counter file write vulnerabilities) and makes exploitation / post-exploitation more difficult and potentially infeasible in practice when combined with other mitigations.

A file being mapped as RWX in memory doesn't imply that writing to this memory will change the file contents (if the file is opened read-only, the mapping must be private and copy-on-write will happen instead).

I never said that it did imply it, and I'm unclear on how your response is relevant. Mapping anything as rwx is disallowed by disallowing execmem. Mapping a file as both writable and executable, whether at the same time or at separate points in time, is not prevented by features like execmem or PaX MPROTECT. File write vulnerabilities also exist, not just memory write vulnerabilities, and w^x is relevant in the same way for those on persistent storage as it is in-memory. In fact, it's generally more thorough and harder to bypass via persistent storage, since ways to hijack or bend control flow via data in memory are pervasive while data in storage only has higher-level issues rather than low-level architectural ones that need to be addressed through other features like CFI.

I'm not so good at reading long posts. The initial discussion was about file writes and W^X, and Wikipedia is pretty clear on this.

Write permission being exclusive from execute permission (w^x) is certainly relevant to files, and I'd suggest using better sources than Wikipedia for learning about technical topics. Wikipedia is a collaboratively edited encyclopedia summarizing what books, news articles, etc. have said on a topic. It's largely not written by experts, and using primary sources is not allowed. I or anyone else could go edit the page to make it less misleading and incomplete... or to make it even worse. I can't understand why you're trying to use Wikipedia as an authoritative source, especially on a technical security topic.

Mitigating file write vulnerabilities and persistence with a w^x policy for files is a standard security topic. All that w^x means is write excluding execute, as opposed to being able to do both (w|x), and strict w^x involves the OS forbidding it for improved security rather than applications just avoiding it in practice, since the latter can go wrong due to bugs or be altered as part of exploitation.

I can't understand why you're trying to use Wikipedia as an authoritative source, especially on a technical security topic.

Maybe because it is linked directly in the merge request that brought all of this on us, huh?

This was the last off-topic comment here, I promise.

@esminis

That is remote execution and none of those measures will help unless reflection is disabled.

You're misunderstanding the threat model and the mitigation approach. These security features are not about protecting the OS from applications. It wouldn't make much sense to have the application request permission from the user for this: it's not giving the application access to any additional data. The goal of these features is not to outright prevent any kind of remote code execution from existing. It's obviously still always going to be possible to implement an interpreter, either intentionally or accidentally, which provides higher-level dynamic code execution via the existing code. This kind of SELinux policy change, disallowing dynamic native code, is being done to protect apps from common classes of vulnerabilities and from exploitation / post-exploitation / persistence techniques.

They could have allowed a way for apps to opt out of the security feature, and it wouldn't have been necessary for users to authorize it, as that doesn't make much sense. As I've said, providing a way to make exceptions is how we approached this in GrapheneOS and what we, along with various academic researchers, proposed upstream. Google decided they wanted to force developers to comply with best practices rather than providing an opt-out from the stricter security policy. Also, as I've mentioned, this is one piece of a bigger puzzle, and various incremental restrictions relevant to this have landed over the years which you haven't noticed. There is also a long way to go before it's complete for native code (and, due to ART, they have no clear path to completing it the way iOS has), and it can be extended beyond native code to other common sources of vulnerabilities and weaknesses. I'm sure the fact that iOS enforces a full variant of the feature for all native code played into their decision making.

Maybe because it is linked directly in the merge request that brought all of this on us, huh?

It's linked as a useful source of supplementary information, not an authoritative source on the topic. The Wikipedia article now has a bit of coverage on how W^X is also relevant to the write / execute permissions for files in addition to memory. You can't expect much technical depth from an encyclopedia article written largely by non-experts who are mostly forbidden from relying on primary sources. Phrack articles could be used as good references on this topic, though.

Locked temporarily to hopefully discourage off-topic discussions