That feels like something the database could do for itself - checking whether a range of mmap()ed blocks is actually in memory is a single syscall (mincore() on Linux).
I guess for large indexes the overhead of walking the page tables is going to be large though, so it’s not necessarily going to be a net win.
But you don't know which blocks you'll need at planning time, so you can't really check that.
You could of course check whether the total database fits in RAM, but it's much more common for the database to be far larger than RAM (say 1TB on a machine with 128GB of RAM) while the actual working set (the data recently touched by queries) is much smaller.
I guess one could implement a background job which samples the pages of tables and indexes to estimate what percentage is typically in RAM, and use that as a planning estimate. But to know which actual pages a query will hit, you have to start executing it (you need to do the index lookup to see which index pages and table pages will be accessed), which would defeat the purpose of query planning.
Granted, but (AFAICT) in this case the database was doing a full table scan because it didn’t think the index was in memory. Checking to see whether the index itself is already loaded seems like something the query planner in principle ought to be able to do efficiently. (Obviously the existing codebase might make it difficult.)