Our file server is down at the moment. As a result, all the PDFs I had open in SumatraPDF are now inaccessible to the application, and the application has completely hung. Ideally, it should still let me work with the open files in a cached mode: that is, it should cache the files in RAM or swap and periodically "ping" the source location for a heartbeat until service is restored.
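For what it's worth, the behavior I'm asking for could be sketched roughly like this (a hypothetical Python sketch of the idea; `CachedFile` and its heartbeat are my invention, not anything that exists in SumatraPDF):

```python
import os
import threading
import time

class CachedFile:
    """Keep a full in-RAM copy of a file and poll its source path."""

    def __init__(self, path, poll_seconds=30.0):
        self.path = path
        with open(path, "rb") as f:
            self.data = f.read()           # full copy lives in RAM
        self.source_alive = True
        self._poll_seconds = poll_seconds
        # The heartbeat runs off the UI thread, so a hanging network
        # call cannot freeze the application itself.
        threading.Thread(target=self._heartbeat, daemon=True).start()

    def _heartbeat(self):
        while True:
            # Note: in a real implementation even this existence check
            # would need a timeout, since it can block on a dead share.
            self.source_alive = os.path.exists(self.path)
            time.sleep(self._poll_seconds)
```

The application would then render from `data` and only re-read from disk once `source_alive` comes back.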
This “problem” should be opened as an Issue at Issues · sumatrapdfreader/sumatrapdf · GitHub
However, I would point out that there are only a few cases where SumatraPDF is designed to work with cached files:
- files from the internet that are cached by the browser
- files that are converted via Ghostscript into a temporary cached PDF
In all other cases the file is read dynamically (it can be changing constantly). That does mean, as you note, that if a networked location fails, the application can hang, which is the root cause of several open issues.
In mission-critical applications it is generally recommended to use a reliable means of replicating remote shared files locally (e.g. SharePoint syncing or another local data cache); SumatraPDF depends on the external file system to provide such features.
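To illustrate the failure mode described above (a generic sketch, not SumatraPDF's actual code, which is C++): a synchronous open/read on a dead network share can block inside the OS for a long time. One common mitigation is to perform the read on a worker thread and treat a timeout as "source unavailable" so the calling (UI) thread stays responsive:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# One long-lived worker thread; a stuck read only ties up this
# worker, not the caller (e.g. a UI thread).
_pool = ThreadPoolExecutor(max_workers=1)

def _read_file(path):
    with open(path, "rb") as f:
        return f.read()

def read_with_timeout(path, timeout=2.0):
    """Return file bytes, or None if the source is slow or unreachable."""
    future = _pool.submit(_read_file, path)
    try:
        return future.result(timeout=timeout)
    except (TimeoutError, OSError):
        # Timed out or I/O error: the caller can keep using any
        # cached copy instead of hanging.
        return None
```

The trade-off is that a genuinely stuck read still occupies the worker thread; the point is only that the user-facing thread never waits on it.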
I’ll remember that in the future. Since this symptom is apparently a shadow of other open issues caused by the same underlying root problem, I will forgo opening a support issue on GitHub unless you would like me to do that.
I already knew that the program dynamically monitors the underlying file for changes. That is a very useful feature, as it “saves clicks” when the file is updated, and in my experience it works well. However, the implication that the use case I described is not “mission critical” is not true at all. Our file server is highly mission critical, though we have debated moving to SharePoint for easier distributed access. The situation I described does not invalidate the need for the program to behave well during an outage of any kind.

This symptom was essentially a crash, the application-level equivalent of a “BSOD.” While the program did not terminate, Windows clearly thought it should and offered to terminate it for me several times. This aspect of the program needs to be redesigned to be more efficient and resilient. My post here was an attempt to shed light on a real problem, not an imagined one, in the belief that whoever created such a fast and efficient program shares my own mindset: that continuous improvement in every area of life is one of the great keys to life.
I am a moderator, not the developer, who may not read all forum posts but does read ALL issues reported on GitHub (even if not replying to all of them).
@kjk any observations on the potential pros/cons of caching the full contents of an unknown number of open files into memory to guard against occasional brown-outs?