How to hash very large files
Deno has deprecated the old std/hash module in favor of the crypto module, but I do not know what the standard approach to hashing something piecemeal looks like. The usual advice around the crypto digest looks like this:
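(A minimal sketch of that advice; the file path, the pinned std version, and the SHA-256 choice are placeholders.)

```ts
import { crypto } from "https://deno.land/std@0.177.0/crypto/mod.ts";

// Read the entire file into memory, then hash it in one shot.
const data = await Deno.readFile("file.bin");
const digest = await crypto.subtle.digest("SHA-256", data);

// Render the raw digest bytes as a hex string.
const hex = Array.from(new Uint8Array(digest))
  .map((b) => b.toString(16).padStart(2, "0"))
  .join("");
console.log(hex);
```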
This works fine on smaller files, but occasionally I need to hash a big file, on the order of 2 GB. That is a lot of data to hold in memory, and it would be ideal if I could leverage readable streams instead. If I go that route, should I simply feed the hash_buffer back into itself, concatenating new data as I go?
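To make the question concrete, here is roughly the scheme I have in mind (just a sketch; `hashBuffer` stands in for hash_buffer, and the chaining is my own guess at how piecemeal hashing might work):

```ts
import { crypto } from "https://deno.land/std@0.177.0/crypto/mod.ts";

const file = await Deno.open("file.bin", { read: true });
let hashBuffer = new Uint8Array(0); // running digest, empty to start

// For each chunk, hash (previous digest || chunk) and carry the result forward.
for await (const chunk of file.readable) {
  const combined = new Uint8Array(hashBuffer.length + chunk.length);
  combined.set(hashBuffer);
  combined.set(chunk, hashBuffer.length);
  hashBuffer = new Uint8Array(await crypto.subtle.digest("SHA-256", combined));
}

console.log(hashBuffer); // final chained digest
```

I suspect this chaining yields a different value than hashing the whole file in one pass, which is part of why I am asking whether there is a standard streaming approach instead.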