Previously, string vectors were built by reading input line by line, and
multiple copies of string vectors were made when searching.
Now, input is read into one big buffer, and string vectors only contain
references to the strings in this buffer. This both speeds up reading of
input and avoids unnecessary copying of strings in various places.
The main downside currently is that input read from stdin is no longer
UTF-8 normalised. This means, for example, that a search for `e` won't
necessarily match `é`. Normalisation is very slow relative to the rest
of tofi, however, and not needed for most use cases. This could be
solved either by accepting the slowdown, or by making normalisation an
option, such as `--unicode` or `--unicode-normalize`.
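A rough sketch of the single-buffer idea (the names here are illustrative, not tofi's actual API): read everything into one growing buffer, then store pointers into that buffer rather than copying each line.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A vector of references: entries point into one shared buffer. */
struct string_ref_vec {
	size_t count;
	size_t size;
	char **buf;
};

/* Read an entire stream into one heap buffer (allocation errors ignored
 * for brevity) and NUL-terminate it. */
static char *read_all(FILE *fp, size_t *len_out)
{
	size_t size = 4096, len = 0, n;
	char *buf = malloc(size);
	while ((n = fread(buf + len, 1, size - len - 1, fp)) > 0) {
		len += n;
		if (len + 1 == size) {
			size *= 2;
			buf = realloc(buf, size);
		}
	}
	buf[len] = '\0';
	*len_out = len;
	return buf;
}

int main(void)
{
	size_t len;
	char *buf = read_all(stdin, &len);

	struct string_ref_vec vec = { .count = 0, .size = 16 };
	vec.buf = malloc(vec.size * sizeof(char *));

	/* Split in place: each vector entry is just a pointer into buf. */
	for (char *line = strtok(buf, "\n"); line != NULL;
			line = strtok(NULL, "\n")) {
		if (vec.count == vec.size) {
			vec.size *= 2;
			vec.buf = realloc(vec.buf, vec.size * sizeof(char *));
		}
		vec.buf[vec.count++] = line;
	}

	printf("read %zu entries\n", vec.count);
	free(vec.buf);
	free(buf);
	return 0;
}
```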
This adds `_sorted` to the names of `string_vec_find` and
`desktop_vec_find`, to make it clear that these functions require their
input to already be sorted.
It also fixes a potential bug caused by not sorting the list of desktops
in drun mode.
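As an illustration of why the `_sorted` suffix matters, a find of this shape (names and signatures hypothetical, not tofi's real code) binary-searches the vector, which is only correct if the caller has sorted it with the same comparator first.

```c
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

static int cmp_strings(const void *a, const void *b)
{
	const char *const *sa = a;
	const char *const *sb = b;
	return strcmp(*sa, *sb);
}

/* Returns the index of key in vec, or -1 if it isn't present.
 * vec MUST already be sorted with cmp_strings. */
static ssize_t string_vec_find_sorted(char **vec, size_t count, const char *key)
{
	char **res = bsearch(&key, vec, count, sizeof(char *), cmp_strings);
	return res == NULL ? -1 : (ssize_t)(res - vec);
}

int main(void)
{
	char *names[] = { "mpv", "firefox", "gimp" };
	size_t count = sizeof(names) / sizeof(names[0]);

	/* Forgetting this sort is exactly the kind of bug described above. */
	qsort(names, count, sizeof(char *), cmp_strings);

	return string_vec_find_sorted(names, count, "gimp") >= 0 ? 0 : 1;
}
```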
All text handling should now be explicitly UTF-8 or UTF-32, removing the
ambiguity around wchar_t and related functions.
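For illustration of the UTF-8/UTF-32 split (a minimal sketch with validation omitted, not necessarily the helpers tofi itself uses), decoding UTF-8 input into UTF-32 codepoints looks roughly like this:

```c
#include <stdint.h>
#include <stdio.h>

/* Decode one codepoint from *s and advance *s past it.
 * Assumes well-formed UTF-8; no validation. */
static uint32_t utf8_next(const char **s)
{
	const uint8_t *p = (const uint8_t *)*s;
	uint32_t cp;
	int extra;

	if (p[0] < 0x80) {
		cp = p[0];
		extra = 0;
	} else if ((p[0] & 0xE0) == 0xC0) {
		cp = p[0] & 0x1F;
		extra = 1;
	} else if ((p[0] & 0xF0) == 0xE0) {
		cp = p[0] & 0x0F;
		extra = 2;
	} else {
		cp = p[0] & 0x07;
		extra = 3;
	}
	for (int i = 1; i <= extra; i++) {
		cp = (cp << 6) | (p[i] & 0x3F);
	}
	*s += extra + 1;
	return cp;
}

int main(void)
{
	const char *s = "héllo";
	while (*s != '\0') {
		printf("U+%04X\n", utf8_next(&s));
	}
	return 0;
}
```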
This should allow case-insensitive matching for non-Latin characters,
and fix matching for characters with diacritics.
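One simple way to get that behaviour, assuming a UTF-8 locale and a platform where `wchar_t` can hold any codepoint (e.g. glibc on Linux), is to fold each decoded codepoint with `towlower()` before comparing; tofi's actual matching code may use different helpers.

```c
#include <locale.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <wctype.h>

/* Case-insensitive comparison of two UTF-32 codepoints. */
static bool cp_equal_fold(uint32_t a, uint32_t b)
{
	return towlower((wint_t)a) == towlower((wint_t)b);
}

int main(void)
{
	setlocale(LC_ALL, "");
	/* U+0414 (Cyrillic capital De) vs U+0434 (small de): should match. */
	printf("%s\n", cp_equal_fold(0x0414, 0x0434) ? "match" : "no match");
	return 0;
}
```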
This enables some simple fuzzy matching logic for searches.
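One common form of simple fuzzy matching, and roughly the kind of logic meant here (this byte-oriented sketch is not necessarily tofi's exact algorithm), accepts a candidate if the pattern's characters all appear in it in order, not necessarily adjacent.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* True if every character of pattern occurs in str, in order.
 * Byte-oriented for simplicity; a real version would also fold case
 * and walk UTF-8 codepoints. */
static bool fuzzy_match(const char *pattern, const char *str)
{
	for (; *pattern != '\0'; pattern++) {
		str = strchr(str, *pattern);
		if (str == NULL) {
			return false;
		}
		str++; /* Resume searching after the matched character. */
	}
	return true;
}

int main(void)
{
	printf("%d\n", fuzzy_match("ffx", "firefox")); /* 1 */
	printf("%d\n", fuzzy_match("xyz", "firefox")); /* 0 */
	return 0;
}
```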
There's starting to be a fair amount of duplicated code between the drun
and normal run modes. At some point in the future, it's likely to be
worth combining them, so that there's only a single
`struct search_item` or similar, with the different modes handled only at
the start and end of execution.
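Purely as a hypothetical sketch of that idea (the field names below are illustrative, not an existing tofi structure), a combined entry might carry the displayed name plus whatever the mode needs to execute it:

```c
#include <stdbool.h>
#include <stdint.h>

struct search_item {
	char *name;     /* String shown in the results list. */
	char *action;   /* Command line (run) or Exec= value (drun). */
	uint32_t score; /* Most recent match score, used for sorting. */
	bool history;   /* Whether this entry appears in the history file. */
};
```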
This is a pretty simple implementation, but it should work for most
use cases. Notably, generic application names aren't used (though that
could be added without too much hassle), and neither are keywords (that
would be more difficult).
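To illustrate the general shape (heavily simplified, and not tofi's actual parser), pulling `Name=` and `Exec=` out of a `.desktop` file's `[Desktop Entry]` group might look like this; a real implementation also has to handle localised keys, `NoDisplay`/`Hidden`, and `Exec` field codes.

```c
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s file.desktop\n", argv[0]);
		return 1;
	}
	FILE *fp = fopen(argv[1], "r");
	if (fp == NULL) {
		perror("fopen");
		return 1;
	}

	char line[1024];
	int in_entry = 0;
	while (fgets(line, sizeof(line), fp) != NULL) {
		line[strcspn(line, "\n")] = '\0';
		if (line[0] == '[') {
			/* Only keys in the [Desktop Entry] group matter here. */
			in_entry = (strcmp(line, "[Desktop Entry]") == 0);
		} else if (in_entry && strncmp(line, "Name=", 5) == 0) {
			printf("name: %s\n", line + 5);
		} else if (in_entry && strncmp(line, "Exec=", 5) == 0) {
			printf("exec: %s\n", line + 5);
		}
	}
	fclose(fp);
	return 0;
}
```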