File handling in server-side JavaScript


const fs = require('fs');

const filename = 'binary.bin';

fs.readFile(filename, (err, data) => {
  if (err) {
    console.error('Error reading file:', err);
    return;
  }
  console.log(data);

  // process the Buffer data using Buffer methods (e.g., slice, copy)
});
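As a minimal sketch of the kind of Buffer processing that comment refers to (the sample bytes here are made up, and `subarray()` is the non-deprecated successor to `slice()`):

```javascript
// A Buffer like the one fs.readFile hands to its callback (sample bytes made up).
const data = Buffer.from('binary.bin contents');

// subarray() returns a view onto the first four bytes (slice() works too, but is deprecated).
const head = data.subarray(0, 4);
console.log(head.toString()); // 'bina'

// copy() writes a range of bytes into another Buffer.
const target = Buffer.alloc(4);
data.copy(target, 0, 0, 4);
console.log(target.toString()); // 'bina'
```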

Streaming data in JavaScript

Another aspect of dealing with files is streaming data in chunks, which becomes a necessity when dealing with large files. Here’s a contrived example of writing out in streaming chunks:


const fs = require('fs');

const filename = 'large_file.txt';
const chunkSize = 1024 * 1024; // (1)
const content = 'This is some content to be written in chunks.'; // (2)
const fileSizeLimit = 5 * 1024 * 1024; // (3)

let writtenBytes = 0; // (4)

const writeStream = fs.createWriteStream(filename, { highWaterMark: chunkSize }); // (5)

function writeChunk() { // (6)
  const chunk = content.repeat(Math.ceil(chunkSize / content.length)); // (7)

  if (writtenBytes + chunk.length > fileSizeLimit) {
    console.error('File size limit reached');
    writeStream.end(); // (10)
    return;
  }

  if (!writeStream.write(chunk)) { // (8)
    writeStream.once('drain', writeChunk);
  }
  writtenBytes += chunk.length; // (9)
  console.log(`Wrote chunk of size: ${chunk.length}, Total written: ${writtenBytes}`);
}

writeStream.on('error', (err) => { // (11)
  console.error('Error writing file:', err);
});

writeStream.on('finish', () => { // (11)
  console.log('Finished writing file');
});

writeChunk(); 

Streaming gives you more power, but you’ll find it involves more work. The work you are doing is in setting chunk sizes and then responding to events based on those chunks. This is the essence of avoiding putting too much of a huge file into memory at once. Instead, you break it into chunks and deal with each one. Here are my notes about the interesting parts of the above write example:

  1. We specify a chunk size in bytes. In this case, we have a 1MB chunk, which is how much content will be written at a time.
  2. Here’s some fake content to write.
  3. Now, we create a file-size limit, in this case, 5MB.
  4. This variable tracks how many bytes we’ve written (so we can stop writing after 5MB).
  5. We create the actual writeStream object. The highWaterMark option tells it how big the chunks are that it will accept.
  6. The writeChunk() function is recursive. Whenever a chunk needs to be handled, it calls itself. It does this until the file limit has been reached, at which point it exits.
  7. Here, we’re just repeating the sample text until it reaches the 1MB size.
  8. Here’s the interesting part. If the file size limit is not exceeded, then we call writeStream.write(chunk):
    1. writeStream.write(chunk) returns false if the buffer size is exceeded. That means we can’t fit more in the buffer given the size limit.
    2. When the buffer is exceeded, the drain event occurs, handled by the handler we define here with writeStream.once('drain', writeChunk);. Notice that this is a recursive callback to writeChunk.
  9. This keeps track of how much we’ve written.
  10. This handles the case where we’re done writing and finishes the stream writer with writeStream.end();.
  11. This demonstrates adding event handlers for error and finish.

And to read it back off the disk, we can use a similar approach:
