Drill
Sorry for abandoning the project for so long, but I was busy with life.
Finally decided to drop D as a programming language (c'mon guys, it's dead) and moved to .NET 7.
BUT I NEED YOUR HELP
I need people skilled in making a good UI (Avalonia, WinForms...). I am not a UI gal.
THANK YOU 🙇
Requirements
Using the terminal
Windows / Chocolatey
choco install dotnet-7.0-sdk
Mac
brew install dotnet@7
How to run directly from source code
dotnet run --configuration Release --project CLI "search string"
Create portable folder
dotnet publish Drill.sln --maxCpuCount --configuration Release --self-contained --output DrillPortable
Todo
- Thread safety
- Sign executables
- Configs/arguments for search
- Set root folder / mountpoints
- Clean docs
- Automatically generate docs
- EXTENSIONS!!!
- PKGBUILD
- Windows Installer?
- ncurses?
- Flatpak
- Threadpool or something similar
- Snap
- UI for Windows
- exe icon
- UI For Mac
- .app
- .dmg
- UI for Linux
- .AppImage
- Regex lists
- Docker
- Shadow CI
- Telegram Bot for new releases?
- Release executables automatically
- Pull request checker
- Heuristics
Get notified of the latest releases
What is this
I was stressed on Linux because I couldn't find the files I needed. File searchers based on system indexing (updatedb) are prone to breaking and hard to configure for the average user, so I pulled an all-nighter and started this.
Drill is a modern file searcher for Linux that tries to fix the old problem of slow searching and indexing. Nowadays even SSDs are common for storage, and nearly every PC has at least 8 GB of RAM and a quad-core CPU; knowing this, it's time to design a future-proof file searcher that doesn't cater to weak systems and uses full multithreaded power in a clever way to find your files as fast as possible.
- Heuristics: The first change is the algorithm. Many file searchers use depth-first traversal, which is a poor choice. Why? Normal humans don't nest folders very deeply, and a depth-first crawl easily gets lost inside "black hole folders" or artificial archives created by software; a breadth-first algorithm that scans your hard disks level by level has a higher chance of reaching the files you need first. The heuristics also exclude some obvious folders while crawling, like Windows and node_modules: the average user doesn't care about .dlls and system files (generally even devs don't), and if you need to find a system file you already know what you are doing and shouldn't be using a UI tool.
- Clever multithreading: The second change is clever multithreading. I've never seen a file searcher that starts a thread per disk, and it's 2019. The bottleneck for file searchers is 99% of the time the disk speed, not the CPU or RAM, so why does everyone scan the disks sequentially?
- Use your goddamn RAM: The third change is caching everything. I don't care about your RAM; I will happily use 8 GB of it if that finds your files faster. Unused RAM is wasted RAM, and that only gets truer as time passes.
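The breadth-first crawl with an exclusion list can be sketched like this (Python for brevity; Drill itself is .NET, and the excluded folder names are illustrative):

```python
from collections import deque
import os

# Folders that are almost never what the user is looking for (illustrative list)
EXCLUDED = {"Windows", "node_modules", ".git", "System Volume Information"}

def bfs_search(root, needle):
    """Breadth-first crawl: shallow matches surface before deep ones."""
    queue = deque([root])
    while queue:
        folder = queue.popleft()
        try:
            entries = os.scandir(folder)
        except OSError:
            continue  # unreadable folder: skip instead of crashing
        with entries:
            for entry in entries:
                if needle.lower() in entry.name.lower():
                    yield entry.path
                if entry.is_dir(follow_symlinks=False) and entry.name not in EXCLUDED:
                    queue.append(entry.path)
```

Because the queue is processed level by level, a match sitting two folders deep is reported before the crawler ever descends into a deeply nested archive.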
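A thread-per-disk layout can be sketched like this (again Python for brevity; `search_disk` is a hypothetical stand-in for the per-disk crawl, and the mountpoint list would come from the system):

```python
import queue
import threading

def search_all_disks(mountpoints, needle, search_disk):
    """Crawl every mountpoint in parallel: the bottleneck is disk I/O,
    so one thread per disk keeps each drive busy independently."""
    results = queue.Queue()  # thread-safe: workers push matches as they appear
    def worker(mount):
        for path in search_disk(mount, needle):
            results.put(path)
    threads = [threading.Thread(target=worker, args=(m,), daemon=True)
               for m in mountpoints]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return list(results.queue)
```

The point of the design is that a slow HDD never blocks a fast SSD: each drive is crawled at its own pace and matches stream into one shared queue.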
Donate
monero:8B5UK4znA6h67sfRK1eCjdUEry8BKseAF1qmKAVhAF5u1zeWiNFfgW9VaARLFh5VZKUQJC346K7wpH7aT17v62DC9igXw3y
Contributors
Code Contributors
This project exists thanks to all the people who contribute. [Contribute].
Financial Contributors
Become a financial contributor and help us sustain our community. [Contribute]
Individuals
Organizations
Support this project with your organization. Your logo will show up here with a link to your website. [Contribute]