Call me old-fashioned, but I’d like to keep control over my own data. In the digital domain, this means swimming against the modern trend of glossy apps that trap your files in proprietary formats. It also means dealing with Android, which locks your data in an app folder you cannot access, because you’ve been denied full user rights on your own smartphone.
To overcome these obstacles, I developed TeXeTeScribe — a note-taking app for Android that gives you full control over your data and the ability to reliably transfer your notes between platforms. The app keeps your notes readable everywhere — phone, PC, TV, fridge, spaceship — because plain text doesn’t care about operating systems. It even decodes ancient ANSI or CP866 encodings that other editors turn into hieroglyphs.
TeXeTeScribe also stores your data on a microSD card to protect it from smartphone hardware failure, or from a vendor pushing an update that crashes the OS and sends everything in the phone’s internal storage to electronic Valhalla. Since Android 6, internal storage has been encrypted, so if any core part of the phone — OS, flash memory, or processor — fails, you can say goodbye to your data.
The app has fulfilled this promise since day one and has already saved my notes a couple of times when my phone caught a bootloop after a firmware crash.
The only thing that bugged me was my old Samsung phone, whose Exynos processor struggled to launch the app quickly with a collection of several hundred jotted-down thoughts and notes.
Since I’m not in a position to get new hardware, I had to make the software fast enough on the existing dinosaur-dated smartphone.
Android is a great OS if all you need from a smartphone is doomscrolling. But it is a terrible OS if you want your data to be accessible. That’s because Android forces apps to store user data in a folder the user can’t access. Unless you take a special approach, there is no way to back up the data stored in a specific app.
The special approach is to get slightly more rights on your smartphone than default Android gives you. In plain words — get root access. But Google and phone vendors have all but killed this: Google blocks Android functionality when it detects that the phone is rooted.
Phone vendors simply block you from even trying to get root access by eliminating the possibility to unlock the bootloader (a small program that launches the main OS; without unlocking it, only an OS signed by the vendor can boot). Moreover, while some vendors are direct and simply take bootloader unlocking away, others claim you can unlock the bootloader, but within such constraints that you become a monkey jumping for a banana hanging ten meters overhead.

For example, Xiaomi (along with its Redmi and Poco subbrands) makes its users wait several days and write posts on its forum just to become eligible to ask for permission to unlock the bootloader. Then users have to be very lucky to get approval, because Xiaomi grants a daily quota of only a couple of thousand permissions for the whole planet, forcing people to click a button at exactly 23:59:59 Beijing time.
After you are granted permission to unlock the bootloader and do so, Google will do its best to make you regret it. NFC payment apps such as Google Pay or Wallet stop working immediately. If you then find a hacky way to install Magisk to get root access, Google may simply stop sending your messages in Google Messages, as the company did in 2024. Lately, Google has announced it disables AI features on phones with an unlocked bootloader.
If you are brave enough and did all of that, you’ll get root access on your phone. Meaning you’ll finally be able to copy the folder of the note-taking app you use.
That folder will have a SQLite database that holds your notes. Get a database editor app, learn how to use it, and voila — you can freely access your notes!
But why bother with a SQLite database file, which no popular reader can open, to store lines of simple text when you can use the plain text format? It has been in use since the 1970s and is readable everywhere — phone, PC, TV, fridge, spaceship — virtually any device with a processor. Except Android, somehow: two decades of development, billions in R&D, and the platform still has no built-in tool to open and edit the most primitive data format computers ever invented.
TXT, JSON, HTML, CSV, XML, YAML, INI, MD — all these files are plain text and, unlike a SQLite database file, can be easily viewed on Android, iOS, Windows, Linux, macOS, ChromeOS, smart TVs, e-readers, gaming consoles, embedded and IoT devices, and in cloud storage and collaboration tools.
Personally, I combined approaches for a decade, until Google made it impossible. Since Android 4, my notes were stored in TXT files on a microSD card, and that saved them countless times: when Android crashed due to a buggy update, when a phone crashed due to gravity, and when I switched to a newer phone. I simply ejected the microSD card and reinserted it into the new phone with all my notes in place.
All was good until my Samsung phone upgraded to Android 8. That version blocked apps from using the microSD card, and the note-taking app I was using got no update. Obviously, storing critical data in the phone’s internal memory is not an option: it is only a question of time before the phone catches an OS failure, rendering all data lost for good. Plus, access to internal phone storage goes through buggy MTP, which destroyed data several times while I was copying it to a computer’s hard drive.
Luckily, Samsung still allowed root access at that time (not anymore). And Android 8 was still supported by Xposed, the best tool of its time for making the most of root access. Xposed allowed installing XInternalSD — a small module that makes Android treat the microSD card as internal memory.
I continued to enjoy writing notes in TXT in my favorite app — until the phone updated again, breaking Xposed. Luckily, I still had remnants of root access, so I could edit files in the /data/data/ folder where Android apps keep their data and settings. The fix was straightforward: I navigated to the app’s folder and manually typed in the path to the microSD card. And it worked — the folder with my notes became accessible again!

But by that time, the app I was using had started to lack some important functionality.
So I decided to make my own app that offered the missing features and didn’t require root access to use the microSD card. Fortunately, despite phone vendors actively removing microSD slots, finding a phone with microSD support is still possible.
The app demonstrated good performance in the test environment, instantly creating a sorted listing of TXT files with content previews. However, when pushed to production — a folder with hundreds of TXT files — the app turned out to be painfully slow on first launch. After some optimizations, it became 2.35 times faster. Here is how.

The first thing I noticed was that the app became unresponsive immediately after launch if a folder had already been selected in a previous session. The app wasn’t crashing, but for several seconds nothing moved. This happened because file enumeration, sorting, and adapter creation all ran right inside onCreate() and onResume().
The key suspect was this call:
displayTxtFiles(selectedFolderUri);
triggered directly on startup. The app froze here because displayTxtFiles() did everything synchronously on the main thread.
Earlier builds did everything directly in onCreate() and early lifecycle calls.
That meant the UI thread was waiting for file I/O (SAF calls through DocumentFile) and CPU work (encoding detection, string decoding, sorting) before Android could even finish rendering the first frame.
The fix: pushing everything heavy to background threads. The fix created a multi-threaded load pipeline using:
backgroundExecutor = Executors.newFixedThreadPool(8);
and a main-thread handler:
private Handler mainHandler = new Handler(Looper.getMainLooper());
All file operations now happen in background threads, while only the UI updates run on the main thread.
The pipeline looks like this:
Step A: displayTxtFiles() launches a background task
backgroundExecutor.execute(() -> {
    // Heavy I/O happens here
});
Nothing inside this block touches UI widgets — it only collects metadata.
Step B: After files are listed and sorted, UI is updated on the main thread
mainHandler.post(() -> updateFileListUI(fileList));
This call is asynchronous. It schedules a lightweight update of the adapter and immediately returns, so the UI stays responsive.
Step C: Previews load after the list appears
Earlier, the preview text for each file (the first 250 characters) was read before anything was shown. Now the app first displays the list with a “Loading…” placeholder and then spawns parallel preview tasks:
loadFilePreviewsParallel(fileList, txtFiles);
That method splits the list into batches:
final int BATCH_SIZE = 10;
and loads previews asynchronously per batch, again via backgroundExecutor.execute().
Each batch updates the visible list later via:
mainHandler.post(() -> fileCardAdapter.notifyDataSetChanged());
This gives the illusion of streaming content — the UI stays responsive to user input, and file content previews fill in gradually. The new version also switched from reading the first 5 lines to reading the first 250 characters, making preview extraction more predictable in terms of I/O cost.
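The whole pipeline can be sketched in plain Java, with a single-threaded executor standing in for Android’s main-thread Handler. The class and method names here (LoadPipelineSketch, runOnce) are illustrative, not the app’s actual code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class LoadPipelineSketch {

    // Steps A and B in miniature: enumerate "files" on a background pool,
    // then post only the lightweight list update to the "main thread".
    public static List<String> runOnce() {
        ExecutorService backgroundExecutor = Executors.newFixedThreadPool(8);
        ExecutorService mainThread = Executors.newSingleThreadExecutor(); // Handler stand-in
        AtomicReference<List<String>> shownList = new AtomicReference<>();
        CountDownLatch uiUpdated = new CountDownLatch(1);

        backgroundExecutor.execute(() -> {
            // Heavy I/O would happen here; a fixed list stands in for the SAF scan.
            List<String> fileList = Arrays.asList("b.txt", "a.txt");
            mainThread.execute(() -> {          // equivalent of mainHandler.post(...)
                shownList.set(fileList);
                uiUpdated.countDown();
            });
        });

        try {
            uiUpdated.await(); // demo only; the real app never blocks like this
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        backgroundExecutor.shutdown();
        mainThread.shutdown();
        return shownList.get();
    }

    public static void main(String[] args) {
        System.out.println(runOnce()); // the list arrives via the "main thread"
    }
}
```

A real implementation would replace the fixed list with the SAF scan and the single-threaded executor with a Handler on Looper.getMainLooper().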
Because now multiple threads operate at once, a few defensive flags were added:
private volatile boolean isLoading = false;
This prevents overlapping background loads if the user taps “Refresh” too quickly.
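As a side note, a plain volatile flag still leaves a tiny window where two rapid taps both read false before either writes true. A minimal sketch of a race-free variant of the same guard, using AtomicBoolean (the RefreshGuard name is hypothetical):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Guard against overlapping background loads. compareAndSet makes the
// check-then-set step atomic, so only one caller can win at a time.
public class RefreshGuard {
    private final AtomicBoolean isLoading = new AtomicBoolean(false);

    /** Returns true only for the caller that won the right to start a load. */
    public boolean tryStartLoad() {
        return isLoading.compareAndSet(false, true);
    }

    public void finishLoad() {
        isLoading.set(false);
    }

    public static void main(String[] args) {
        RefreshGuard guard = new RefreshGuard();
        System.out.println(guard.tryStartLoad()); // true:  first tap starts a load
        System.out.println(guard.tryStartLoad()); // false: second tap is ignored
        guard.finishLoad();
        System.out.println(guard.tryStartLoad()); // true:  allowed again after finish
    }
}
```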
Also, backgroundExecutor is recreated in onResume() if it was shut down:
if (backgroundExecutor == null || backgroundExecutor.isShutdown()) {
    backgroundExecutor = Executors.newFixedThreadPool(8);
}
That ensures resuming the app never hits a dead executor, which would otherwise stall background operations.
Making DocumentFile calls "cheaper"
Having sorted out the UI responsiveness, DocumentFile emerged as the main hidden performance killer. That is because of how Android’s Storage Access Framework (SAF) works. On modern Android, SAF is the only way to access files outside an app’s private sandbox, especially if the app has to reach the microSD card. To the user, SAF looks like a system-managed file picker.
The framework wraps every filesystem access in multiple permission and content resolver layers. In practice, each DocumentFile operation (like listFiles() or getUri()) goes through Inter-Process Communication (IPC) calls into the system’s media provider.
Iterating over hundreds of files means each property request blocks the thread until Android’s media provider responds — milliseconds of overhead per call. Multiply that by hundreds of files, and file listing turns into a seconds-long stall.
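The arithmetic is easy to sketch. The 5 ms per IPC roundtrip below is an assumed illustrative figure, not a measurement from the app:

```java
// Back-of-the-envelope estimate of why per-file SAF calls add up.
public class SafCostEstimate {

    public static long estimatedMillis(int files, int ipcCallsPerFile, long millisPerCall) {
        return (long) files * ipcCallsPerFile * millisPerCall;
    }

    public static void main(String[] args) {
        // 300 files x 2 calls (getName + lastModified) x 5 ms each = 3000 ms.
        System.out.println(estimatedMillis(300, 2, 5) + " ms");
    }
}
```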
The original code did something like this:
DocumentFile[] files = directory.listFiles();
for (DocumentFile file : files) {
    if (file.getName().endsWith(".txt")) {
        long lastModified = file.lastModified();
        ...
        // read metadata, open stream, etc.
    }
}
Every iteration triggered:
getName() → an IPC roundtrip
lastModified() → another IPC roundtrip
By the time the loop finished several hundred files, the UI thread had already spent seconds waiting for Binder responses.
DocumentFile performance issues can’t be “fixed” — by design, SAF trades speed for the ability to control apps’ file access. But they can be contained by minimizing the number and frequency of SAF calls.
Here’s how it was done:
One listFiles() call only
directory.listFiles() is still used, but only once per refresh. The result is immediately copied into a local ArrayList<DocumentFile> for further work:
DocumentFile[] files = directory.listFiles();
if (files != null) {
    for (DocumentFile file : files) {
        String name = file.getName(); // one IPC roundtrip, reused below
        if (name != null && name.toLowerCase().endsWith(".txt")) {
            tempTxtFiles.add(file);
        }
    }
}
No further re-querying of the directory. The app keeps that snapshot for the entire UI session.
The key performance improvement was wrapping metadata reads (getName(), lastModified()) in parallel:
List<Future<FileMetadata>> futures = new ArrayList<>();
for (DocumentFile file : tempTxtFiles) {
    futures.add(backgroundExecutor.submit(() ->
            new FileMetadata(file.getName(), file.lastModified(), file)
    ));
}
Each thread fetches metadata for one file, distributing SAF latency across multiple cores. This alone cut directory scan times substantially.
FileMetadata (a small internal class) holds:
String name;
long lastModified;
DocumentFile documentFile;
This prevents repeated calls like file.getName() later in the preview stage: those values are reused directly from memory.
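Outside of Android, the same pattern can be tried with a simulated slow getter, where Thread.sleep() stands in for the IPC wait. The ParallelMetadata class and its names are illustrative, not the app’s code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelMetadata {

    // Pretend DocumentFile getter: blocks ~20 ms, like an IPC roundtrip.
    static String slowName(int i) {
        try {
            Thread.sleep(20);
        } catch (InterruptedException ignored) {
        }
        return "note" + i + ".txt";
    }

    // Submit one task per file; the pool overlaps the waits instead of
    // paying them one after another. Future order preserves file order.
    public static List<String> fetchAll(int count, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<String>> futures = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            final int idx = i;
            futures.add(pool.submit(() -> slowName(idx)));
        }
        List<String> names = new ArrayList<>();
        try {
            for (Future<String> f : futures) {
                names.add(f.get());
            }
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
        pool.shutdown();
        return names;
    }

    public static void main(String[] args) {
        // Eight ~20 ms waits overlap into roughly one wait on an 8-thread pool.
        System.out.println(fetchAll(8, 8));
    }
}
```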
The app no longer opens every file via DocumentFile during the initial scan. Instead, preview loading (which calls openInputStream(file.getUri())) is deferred to later batches:
loadFilePreviewsParallel(fileList, txtFiles);
Each batch processes at most 10 files at a time. This ensures ContentResolver.openInputStream() is never called in a burst that saturates the SAF service.
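The batch split itself is simple list slicing. A generic sketch under the same BATCH_SIZE of 10 (PreviewBatcher is a hypothetical name):

```java
import java.util.ArrayList;
import java.util.List;

// Split a file list into batches of at most BATCH_SIZE items, so preview
// reads never hit the SAF service in one big burst.
public class PreviewBatcher {
    static final int BATCH_SIZE = 10;

    public static <T> List<List<T>> toBatches(List<T> items) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += BATCH_SIZE) {
            batches.add(items.subList(i, Math.min(i + BATCH_SIZE, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> files = new ArrayList<>();
        for (int i = 0; i < 25; i++) files.add(i);
        System.out.println(toBatches(files).size()); // 25 files -> 3 batches
    }
}
```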
The next optimization target was the sorting step, which revealed an inefficiency of its own. The original code sorted DocumentFile objects directly:
Collections.sort(txtFiles, (f1, f2) ->
Long.compare(f2.lastModified(), f1.lastModified()));
At first glance, that looks harmless. But under the hood:
f1.lastModified() calls ContentResolver.query() for f1.getUri().
f2.lastModified() calls another query for f2.getUri().
So, for 300 files, sorting could easily take up to 10 seconds of real time, entirely blocking the background thread.
Sorting time scaled superlinearly with the number of files because of random access patterns and cache misses inside the DocumentsProvider. Also, the comparator invoked getName() when sorting alphabetically (in alternative sort modes), and getName() also hits the provider. That doubled the pain.
The fix was to detach sorting from I/O entirely. Instead of sorting DocumentFile objects directly, the app first cached each file’s metadata (name, timestamp, URI) in a lightweight class:
class FileMetadata {
    String name;
    long lastModified;
    DocumentFile documentFile;

    FileMetadata(String name, long lastModified, DocumentFile documentFile) {
        this.name = name;
        this.lastModified = lastModified;
        this.documentFile = documentFile;
    }
}
This cache is built immediately after listing:
for (DocumentFile file : directory.listFiles()) {
    String name = file.getName(); // single IPC roundtrip per file
    if (file.isFile() && name != null) {
        fileMetadataList.add(new FileMetadata(name, file.lastModified(), file));
    }
}
All SAF calls happen once per file, in a controlled batch, not inside sorting.
Now sorting operates purely in memory:
Collections.sort(fileMetadataList,
        (f1, f2) -> Long.compare(f2.lastModified, f1.lastModified));
Here both f1.lastModified and f2.lastModified are primitive longs stored in RAM. This executes in microseconds, regardless of the number of files.
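Stripped of the Android types, the cached sort can be sketched as follows. The sketch mirrors the FileMetadata fields, minus the DocumentFile reference; the MetadataSort class name is illustrative:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MetadataSort {

    // Minimal stand-in for the article's FileMetadata cache entry.
    static class FileMetadata {
        final String name;
        final long lastModified;

        FileMetadata(String name, long lastModified) {
            this.name = name;
            this.lastModified = lastModified;
        }
    }

    // The comparator reads a plain long field, so no SAF call is ever made.
    public static List<String> newestFirst(List<FileMetadata> list) {
        list.sort(Comparator.comparingLong((FileMetadata f) -> f.lastModified).reversed());
        List<String> names = new ArrayList<>();
        for (FileMetadata f : list) names.add(f.name);
        return names;
    }

    public static void main(String[] args) {
        List<FileMetadata> list = new ArrayList<>();
        list.add(new FileMetadata("old.txt", 100));
        list.add(new FileMetadata("new.txt", 300));
        list.add(new FileMetadata("mid.txt", 200));
        System.out.println(newestFirst(list)); // [new.txt, mid.txt, old.txt]
    }
}
```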
More importantly, the metadata itself is now gathered using parallel futures. Instead of sequential SAF calls that accumulate latency, multiple threads fetch getName() and lastModified() concurrently, distributing the IPC overhead across CPU cores.
Then, once sorted, the display or preview stage references metadata.documentFile only for visible items.
A. Pre-sorting at scan time
Instead of re-sorting every time the user reopens the app or returns to the file list, the cached metadata is reused unless the folder content actually changes (detected by comparing last scan time and number of files).
B. Optional sorting modes
Alphabetical sorting was switched to use cached name fields (already read once). This avoids getName() calls inside the comparator loop.
C. UI thread offload
The sorting now happens fully inside a background executor:
backgroundExecutor.execute(() -> {
    Collections.sort(fileMetadataList, comparator);
    runOnUiThread(() -> updateUI(fileMetadataList));
});
This ensures the main thread remains responsive even if sorting is triggered repeatedly.
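The cache-validity check from step A reduces to two comparisons. A hypothetical sketch, assuming the heuristic the article names (comparing last scan time and file count; the CacheCheck name and field names are illustrative):

```java
public class CacheCheck {

    // The cached snapshot is considered stale if the file count changed,
    // or if anything in the folder was modified after the last scan.
    public static boolean isStale(int cachedCount, long lastScanTime,
                                  int currentCount, long newestModified) {
        return currentCount != cachedCount || newestModified > lastScanTime;
    }

    public static void main(String[] args) {
        System.out.println(isStale(300, 1000L, 300, 900L)); // false: reuse cached sort
        System.out.println(isStale(300, 1000L, 301, 900L)); // true: rescan and resort
    }
}
```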
After these steps, manual testing with folders containing several hundred .txt files showed that the app has become much faster without losing any of its computation-heavy functionality (encoding detection, preview loading, sorting by last-modified date).
The engineering lesson here isn’t “use threads” or “cache things”. It’s the order of thinking. In this project, the optimization path followed a logical, reproducible pattern: first make the UI responsive, then contain the expensive SAF layer, and only then detach sorting from I/O entirely.
The TeXeTeScribe app is available from the EB43 GitHub page, or from F-Droid.