path: root/src/choices.c
* Simplify input_delimiter handling (John Hawthorn, 2019-08-16)
* Add ability to use null as input delimiter (Ashkan Kiani, 2019-08-16)

  Update tty to print newline as space. Add tty_putc.
* choices: Fix a typo ("stings") (Jonathan Neuschäfer, 2018-06-17)
* Add -j option to control parallelism (John Hawthorn, 2017-01-31)
* Pass options to choices_init (John Hawthorn, 2017-01-31)
* Merge partially sorted lists in parallel (John Hawthorn, 2017-01-26)
* Replace k-way-merge with 2-way merge (John Hawthorn, 2017-01-26)
* Perform sort in parallel (John Hawthorn, 2017-01-26)
* Fix memory leak of job (John Hawthorn, 2017-01-26)
* Improve parallelism of search workers (John Hawthorn, 2017-01-08)

  Previously the list of candidates was split between threads a priori,
  with each thread being evenly distributed a contiguous range from the
  search candidates. This did a bad job of distributing the work evenly.
  There are likely to be areas with significantly more matches than
  others (ex. files within directories which match the search terms), as
  well as areas with longer strings than others (ex. deep directories).
  Because of the type of data fzy receives, work allocation needs to be
  dynamic.

  This commit changes the workers to operate on the candidates in
  batches, until they have all been processed. Batches are allocated by
  locking a mutex and grabbing the next available range of BATCH_SIZE
  candidates.

  BATCH_SIZE is currently set at 512, which worked best on my laptop in
  a quick test. This will always be a compromise. Small batch sizes will
  distribute the work more evenly, but larger batch sizes will be
  friendlier to CPU caches.

  Quick testing:

    Before: ./fzy -e drivers --benchmark < linux_files.txt
            1.69s user 0.03s system 163% cpu 1.053 total
    After:  ./fzy -e drivers --benchmark < linux_files.txt
            2.12s user 0.02s system 296% cpu 0.721 total
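  The batched allocation described above can be sketched as follows.
  This is a minimal illustration, not fzy's actual code: the names
  (`worker_state`, `claim_batch`) and struct layout are assumptions,
  though BATCH_SIZE matches the value the commit mentions.

  ```c
  #include <pthread.h>
  #include <stddef.h>

  #define BATCH_SIZE 512

  /* Hypothetical shared state; not fzy's real struct. */
  struct worker_state {
  	pthread_mutex_t lock;
  	size_t next;   /* index of the first unclaimed candidate */
  	size_t total;  /* total number of candidates */
  };

  /* Claim the next batch of up to BATCH_SIZE candidates.
   * Writes the batch's start index to *start and returns its length,
   * or 0 when all candidates have been handed out. */
  static size_t claim_batch(struct worker_state *ws, size_t *start) {
  	pthread_mutex_lock(&ws->lock);
  	*start = ws->next;
  	size_t count = ws->total - ws->next;
  	if (count > BATCH_SIZE)
  		count = BATCH_SIZE;
  	ws->next += count;
  	pthread_mutex_unlock(&ws->lock);
  	return count;
  }
  ```

  Each worker would loop on `claim_batch` until it returns 0, so fast
  workers naturally pick up more batches than slow ones.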
* Store choices on job struct (John Hawthorn, 2017-01-08)
* Create search_job struct (John Hawthorn, 2017-01-08)
* Remove unused and uninitialized worker struct var (John Hawthorn, 2017-01-08)
* Use score_t instead of double (John Hawthorn, 2016-07-10)
* Use number of processors as worker count (John Hawthorn, 2016-06-22)

  Since we're dividing the search set equally between processors, we
  want to run with the same number of workers that we have CPU execution
  threads. This avoids having a worker which is starved until the end of
  execution.
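  One portable (POSIX) way to size the worker pool to the number of
  online CPUs is `sysconf`. This is a sketch under that assumption; the
  function name and the fallback of 1 are illustrative, not taken from
  fzy.

  ```c
  #include <unistd.h>

  /* Return the number of online CPUs, falling back to 1 if the
   * query fails or reports a nonsensical value. */
  static unsigned get_worker_count(void) {
  	long n = sysconf(_SC_NPROCESSORS_ONLN);
  	return n > 0 ? (unsigned)n : 1;
  }
  ```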
* Store worker_count on choices_t (John Hawthorn, 2016-06-22)
* Use threading when matching/scoring (John Hawthorn, 2016-06-22)
* Skip sorting on empty search string (John Hawthorn, 2016-06-08)

  For the empty query, sorting can be the slowest part of the search.
  Since the empty query gives no scores, and we've now made our sort
  stable, in this case we can simply skip sorting.

  A sort of 250000 entries took about 10ms on my laptop, which is not a
  huge amount. However, it's not zero and is free to skip.
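  The optimization above amounts to a one-line guard: with a stable sort
  and all-equal scores, sorting is the identity, so it can be elided.
  A minimal sketch, with illustrative names (`maybe_sort`, `cmp_desc`)
  that are not fzy's actual identifiers:

  ```c
  #include <stdlib.h>

  /* Descending comparator for double scores. */
  static int cmp_desc(const void *a, const void *b) {
  	double x = *(const double *)a, y = *(const double *)b;
  	return (x < y) - (x > y);
  }

  /* Sort only when the query is non-empty; an empty query leaves
   * every score equal, so the (stable) sorted order is the input
   * order and the sort can be skipped entirely. */
  static void maybe_sort(double *scores, size_t n, const char *query) {
  	if (query == NULL || query[0] == '\0')
  		return;
  	qsort(scores, n, sizeof *scores, cmp_desc);
  }
  ```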
* Make sorting stable (John Hawthorn, 2016-06-08)

  C's stdlib qsort isn't a stable sort. We want ours to be, so that any
  equivalent matches are stored in the order they came in.
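  The standard trick for getting stability out of qsort is to record
  each entry's original position and use it as a comparator tiebreaker.
  This is a sketch of that technique; the struct and field names are
  assumptions, not fzy's actual ones.

  ```c
  #include <stddef.h>
  #include <stdlib.h>

  /* Illustrative scored entry; `index` records input order. */
  struct scored {
  	double score;
  	size_t index;
  };

  /* Sort by score descending; on ties, preserve input order.
   * The index tiebreaker makes the ordering total, so qsort's
   * result is deterministic and effectively stable. */
  static int cmp_scored(const void *a, const void *b) {
  	const struct scored *x = a, *y = b;
  	if (x->score > y->score) return -1;
  	if (x->score < y->score) return  1;
  	if (x->index < y->index) return -1;
  	if (x->index > y->index) return  1;
  	return 0;
  }
  ```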
* Move sources into src directory (John Hawthorn, 2016-05-21)