With FlexSearch or lunr or similar, building an index is so fast for “thousands of items” that it’s fine to build it when the user opens a search interface and throw it away once they’re done.
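A minimal sketch of that build-on-open approach with lunr, assuming a hypothetical `Item` shape with `id`, `title`, and `body` fields; the index is rebuilt from scratch each time the search UI opens and simply discarded afterwards:

```typescript
import lunr from "lunr";

interface Item {
  id: string;
  title: string;
  body: string;
}

// Build the whole index in one pass over the items.
export function buildIndex(items: Item[]) {
  return lunr(function () {
    this.ref("id");
    this.field("title");
    this.field("body");
    items.forEach((item) => this.add(item));
  });
}

// Usage: build when the search UI opens, query per keystroke,
// let the index be garbage collected when the UI closes.
// const idx = buildIndex(items);
// const hits = idx.search("flex");
```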
Any non-in-memory search indices? I was hoping to find one that used IndexedDB to implement fuzzy search. Maybe that’s just not as performant, which is why I haven’t found a popular library yet.
You do not want to use IndexedDB; you should avoid IndexedDB if you can make life work without it.
IndexedDB rows need to be very coarse-grained: each operation with the IndexedDB API has very high overhead compared to something like walking an LSM tree in RocksDB. If the index does fit in memory, then the best move is to store and load the entire index from a single IndexedDB row, or a few of them.
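A minimal sketch of that coarse-grained pattern, assuming the in-memory index can be serialized to a string (e.g. JSON for lunr, an export/import step for FlexSearch); the whole index lives in one IndexedDB row, so loading and saving each cost one get/put instead of one operation per document:

```typescript
const DB_NAME = "search-cache";
const STORE = "indexes";

function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(DB_NAME, 1);
    req.onupgradeneeded = () => req.result.createObjectStore(STORE);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Persist the whole serialized index as a single row.
export async function saveIndex(key: string, serialized: string): Promise<void> {
  const db = await openDb();
  await new Promise<void>((resolve, reject) => {
    const tx = db.transaction(STORE, "readwrite");
    tx.objectStore(STORE).put(serialized, key); // one coarse row for the whole index
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

// Read it back with a single get.
export async function loadIndex(key: string): Promise<string | undefined> {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const req = db.transaction(STORE, "readonly").objectStore(STORE).get(key);
    req.onsuccess = () => resolve(req.result as string | undefined);
    req.onerror = () => reject(req.error);
  });
}
```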
If you need to index a lot of documents in a way that doesn't fit in memory, I recommend considering sqlite3, stored in either OPFS or IndexedDB, and using sqlite's FTS5 full-text search for the index. There are several sqlite backends available (see https://github.com/rhashimoto/wa-sqlite/tree/master/src/exam...); something like their IDB VFS will store each block of the sqlite file as a row in IndexedDB, essentially batching storage of the index rows into IDB rows for you (for more details read https://github.com/rhashimoto/wa-sqlite/blob/3c202615ed6f54e...).
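A minimal sketch of the FTS5 side only; `execSql` here is a hypothetical query helper standing in for whatever your sqlite-wasm setup (wa-sqlite or similar) actually exposes, not that library's real API:

```typescript
type ExecSql = (sql: string, params?: unknown[]) => Promise<unknown[][]>;

// The FTS5 virtual table is the index; sqlite maintains it and the VFS
// (OPFS or IndexedDB-backed) persists it in coarse blocks.
export async function setupFullText(execSql: ExecSql): Promise<void> {
  await execSql(
    "CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(title, body)"
  );
}

export async function addDoc(
  execSql: ExecSql,
  title: string,
  body: string
): Promise<void> {
  await execSql("INSERT INTO docs (title, body) VALUES (?, ?)", [title, body]);
}

export async function searchDocs(
  execSql: ExecSql,
  query: string
): Promise<unknown[][]> {
  // MATCH runs the full-text query; bm25() ranks rows by relevance.
  return execSql(
    "SELECT rowid, title FROM docs WHERE docs MATCH ? ORDER BY bm25(docs)",
    [query]
  );
}
```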
Many years ago, when I was building Lunr, it was initially based on IndexedDB, but it was _much_ slower, and the datasets I had in mind easily fit in memory, so I didn’t pursue it. No idea if that has changed since, though.
For typo resistance and stemming it’s useful. Mostly it depends on whether your search-matching logic needs to allocate per document: if you need to concatenate 10 strings together to get the complete search text and then split that into some kind of stem array, doing that O(thousands) of times per keystroke can be laggy in JS, especially on memory-constrained devices, because of GC pressure. Better to do it once up front. Then voila, you’ve got an index.
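A minimal sketch of that do-it-once idea: each document's fields are concatenated, tokenized, and stemmed a single time, so each keystroke only scans precomputed arrays instead of rebuilding strings per document. The stemmer here is a trivial placeholder; swap in a real one (e.g. a Porter stemmer) in practice.

```typescript
// Placeholder stemmer, purely illustrative.
const stem = (word: string): string => word.replace(/(ing|ed|s)$/, "");

interface Doc {
  id: string;
  fields: string[]; // e.g. title, description, tags...
}

interface IndexedDoc {
  id: string;
  tokens: string[]; // stemmed once up front, reused on every keystroke
}

// Done once when the search UI opens: all the allocation happens here.
export function buildIndex(docs: Doc[]): IndexedDoc[] {
  return docs.map((doc) => ({
    id: doc.id,
    tokens: doc.fields
      .join(" ")
      .toLowerCase()
      .split(/\W+/)
      .filter(Boolean)
      .map(stem),
  }));
}

// Done per keystroke: only comparisons against the precomputed tokens,
// no per-document string building, so little GC pressure.
export function search(index: IndexedDoc[], query: string): string[] {
  const q = stem(query.toLowerCase());
  return index
    .filter((doc) => doc.tokens.some((t) => t.startsWith(q)))
    .map((doc) => doc.id);
}
```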