It is used to return any trailing bytes remaining in the internal buffer.
Node.js StringDecoder Example
Let’s see a simple example of Node.js StringDecoder.
File: stringdecoder_example1.js
const StringDecoder = require('string_decoder').StringDecoder;
const decoder = new StringDecoder('utf8');
const buf1 = Buffer.from('this is a test');
console.log(decoder.write(buf1)); // prints: this is a test
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(decoder.write(buf2)); // prints: this is a tést
const buf3 = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]);
console.log(decoder.write(buf3)); // prints: buffer
The Node.js path module is used to handle and transform file paths. This module can be imported by using the following syntax:
Syntax:
var path = require("path")
Node.js Path Methods
Let’s see the list of methods used in path module:
1. path.normalize(p): It is used to normalize a string path, taking care of '..' and '.' parts.
2. path.join([path1][, path2][, ...]): It is used to join all arguments together and normalize the resulting path.
3. path.resolve([from ...], to): It is used to resolve a sequence of paths into an absolute path.
4. path.isAbsolute(path): It determines whether path is an absolute path. An absolute path always resolves to the same location, regardless of the working directory.
5. path.relative(from, to): It is used to determine the relative path from "from" to "to".
6. path.dirname(p): It returns the directory name of a path. It is similar to the Unix dirname command.
7. path.basename(p[, ext]): It returns the last portion of a path. It is similar to the Unix basename command.
8. path.extname(p): It returns the extension of the path, from the last '.' to the end of the string in the last portion of the path. If there is no '.' in the last portion of the path, or its first character is '.', it returns an empty string.
9. path.parse(pathString): It returns an object from a path string.
10. path.format(pathObject): It returns a path string from an object; the opposite of path.parse() above.
In Node.js, file I/O is provided by simple wrappers around standard POSIX functions. The Node File System (fs) module can be imported using the following syntax:
Syntax:
var fs = require("fs")
Node.js FS Reading File
Every method in the fs module has synchronous and asynchronous forms.
Asynchronous methods take a completion callback as their last parameter. The asynchronous form is generally preferred over the synchronous form because it never blocks program execution, whereas the synchronous form does.
Let’s take an example:
Create a text file named “input.txt” having the following content.
File: input.txt
Javatpoint is one of the best online tutorial websites to learn different technologies
in a very easy and efficient manner.
Let’s take an example to create a JavaScript file named “main.js” having the following code:
File: main.js
var fs = require("fs");

// Asynchronous read
fs.readFile('input.txt', function (err, data) {
    if (err) {
        return console.error(err);
    }
    console.log("Asynchronous read: " + data.toString());
});

// Synchronous read
var data = fs.readFileSync('input.txt');
console.log("Synchronous read: " + data.toString());

console.log("Program Ended");
Open Node.js command prompt and run the main.js:
node main.js
Node.js Open a file
Syntax:
Following is the syntax of the method to open a file in asynchronous mode:
fs.open(path, flags[, mode], callback)
Parameter explanation:
Following is the description of parameters used in the above syntax:
path: A string containing the file name, including its path.
flags: Flags specify the behavior of the file to be opened. All possible values are listed below.
mode: This sets the file mode (permission and sticky bits), but only if the file was created. It defaults to 0666, readable and writable.
callback: The callback function, which gets two arguments (err, fd).
Node.js Flags for Read/Write
Following is a list of flags for read/write operation:
r: Open file for reading. An exception occurs if the file does not exist.
r+: Open file for reading and writing. An exception occurs if the file does not exist.
rs: Open file for reading in synchronous mode.
rs+: Open file for reading and writing, telling the OS to open it synchronously. See the notes for 'rs' about using this with caution.
w: Open file for writing. The file is created (if it does not exist) or truncated (if it exists).
wx: Like 'w' but fails if the path exists.
w+: Open file for reading and writing. The file is created (if it does not exist) or truncated (if it exists).
wx+: Like 'w+' but fails if the path exists.
a: Open file for appending. The file is created if it does not exist.
ax: Like 'a' but fails if the path exists.
a+: Open file for reading and appending. The file is created if it does not exist.
ax+: Like 'a+' but fails if the path exists.
Create a JavaScript file named “main.js” having the following code to open a file input.txt for reading and writing.
File: main.js
var fs = require("fs");

// Asynchronous - Opening File
console.log("Going to open file!");
fs.open('input.txt', 'r+', function(err, fd) {
    if (err) {
        return console.error(err);
    }
    console.log("File opened successfully!");
});
Open Node.js command prompt and run the main.js:
node main.js
Node.js File Information Method
Syntax:
Following is the syntax of the method to get file information:
fs.stat(path, callback)
Parameter explanation:
path: A string containing the file name, including its path.
callback: The callback function, which gets two arguments (err, stats), where stats is an object of the fs.Stats type.
Node.js fs.Stats class Methods
stats.isFile(): Returns true if the file is a regular file.
stats.isDirectory(): Returns true if the file is a directory.
stats.isBlockDevice(): Returns true if the file is a block device.
stats.isCharacterDevice(): Returns true if the file is a character device.
stats.isSymbolicLink(): Returns true if the file is a symbolic link.
stats.isFIFO(): Returns true if the file is a FIFO.
stats.isSocket(): Returns true if the file is a socket.
Let’s take an example to create a JavaScript file named main.js having the following code:
File: main.js
var fs = require("fs");

console.log("Going to get file info!");
fs.stat('input.txt', function (err, stats) {
    if (err) {
        return console.error(err);
    }
    console.log(stats);
    console.log("Got file info successfully!");
    // Check file type
    console.log("isFile ? " + stats.isFile());
    console.log("isDirectory ? " + stats.isDirectory());
});
Now open the Node.js command prompt and run main.js:
node main.js
Streams are the objects that facilitate you to read data from a source and write data to a destination. There are four types of streams in Node.js:
Readable: This stream is used for read operations.
Writable: This stream is used for write operations.
Duplex: This stream can be used for both read and write operations.
Transform: It is a type of duplex stream where the output is computed from the input.
Each type of stream is an EventEmitter instance and emits several events at different times. Following are some commonly used events:
data: This event is fired when there is data available to read.
end: This event is fired when there is no more data available to read.
error: This event is fired when there is any error receiving or writing data.
finish: This event is fired when all the data has been flushed to the underlying system.
Node.js Reading from stream
Create a text file named input.txt having the following content:
Javatpoint is one of the best online tutorial websites to learn different technologies in a very easy and efficient manner.
Create a JavaScript file named main.js having the following code:
File: main.js
var fs = require("fs");
var data = '';

// Create a readable stream
var readerStream = fs.createReadStream('input.txt');

// Set the encoding to be utf8
readerStream.setEncoding('utf8');

// Handle stream events --> data, end, and error
readerStream.on('data', function(chunk) {
    data += chunk;
});

readerStream.on('end', function() {
    console.log(data);
});

readerStream.on('error', function(err) {
    console.log(err.stack);
});

console.log("Program Ended");
Now, open the Node.js command prompt and run the main.js
node main.js
Output:
Node.js Writing to stream
Create a JavaScript file named main.js having the following code:
File: main.js
var fs = require("fs");
var data = 'A Solution of all Technology';

// Create a writable stream
var writerStream = fs.createWriteStream('output.txt');

// Write the data to the stream with utf8 encoding
writerStream.write(data, 'utf8');

// Mark the end of file
writerStream.end();

// Handle stream events --> finish and error
writerStream.on('finish', function() {
    console.log("Write completed.");
});

writerStream.on('error', function(err) {
    console.log(err.stack);
});

console.log("Program Ended");
Now open the Node.js command prompt and run the main.js
node main.js
You will see the following result:
Now you can see that a text file named "output.txt" has been created in the same directory where you saved the "input.txt" and "main.js" files. In my case, it is on the desktop.
Open the “output.txt” and you will see the following content.
Node.js Piping Streams
Piping is a mechanism where the output of one stream is used as the input to another stream. There is no limit on piping operations.
Let’s take a piping example for reading from one file and writing it to another file.
File: main.js
var fs = require("fs");
// Create a readable stream
var readerStream = fs.createReadStream('input.txt');
// Create a writable stream
var writerStream = fs.createWriteStream('output.txt');
// Pipe the read and write operations
// read input.txt and write data to output.txt
readerStream.pipe(writerStream);
console.log("Program Ended");
Open the Node.js command prompt and run main.js:
node main.js
Now you can see that a text file named "output.txt" has been created in the same directory where you saved the "main.js" file. In my case, it is on the desktop.
Open the “output.txt” and you will see the following content.
Node.js Chaining Streams
Chaining streams is a mechanism of creating a chain of multiple stream operations by connecting the output of one stream to another stream. It is generally used with piping operations.
Let’s take an example of piping and chaining to compress a file and then decompress the same file.
File: main.js
var fs = require("fs");
var zlib = require('zlib');
// Compress the file input.txt to input.txt.gz
fs.createReadStream('input.txt')
.pipe(zlib.createGzip())
.pipe(fs.createWriteStream('input.txt.gz'));
console.log("File Compressed.");
Open the Node.js command prompt and run main.js
node main.js
You will get the following result:
Now you will see that the file "input.txt" is compressed and a new file named "input.txt.gz" is created in the current directory.
To decompress the same file, put the following code in the "main.js" file:
File: main.js
var fs = require("fs");
var zlib = require('zlib');
// Decompress the file input.txt.gz to input.txt
fs.createReadStream('input.txt.gz')
.pipe(zlib.createGunzip())
.pipe(fs.createWriteStream('input.txt'));
console.log("File Decompressed.");
AutoKeras refers to an AutoML system based on Keras. It is developed by DATA Lab at Texas A&M University. The purpose of AutoKeras is to make machine learning accessible for everyone. It provides high-level end-to-end APIs such as ImageClassifier or TextClassifier to solve machine learning problems in a few lines, as well as flexible building blocks to perform architecture search.
Keras comes packaged with TensorFlow 2 as tensorflow.keras. To start using Keras, simply install TensorFlow 2. Keras/TensorFlow is compatible with:
Python 3.5–3.8
Ubuntu 16.04 or later
Windows 7 or later
macOS 10.12.6 (Sierra) or later
Keras has built-in industry-strength support for multi-GPU training and distributed multi-worker training via the tf.distribute API. If you have multiple GPUs on your machine, you can train your model on all of them by: firstly, creating a tf.distribute.MirroredStrategy object; secondly, creating and compiling your model inside the strategy's scope; lastly, calling fit() and evaluate() on a dataset as usual.
Keras Tuner is an easy-to-use, scalable hyperparameter optimization framework that solves the pain points of hyperparameter search. In this, you can easily configure your search space with a define-by-run syntax, then leverage one of the available search algorithms for finding the best hyperparameter values for your models. Further, Keras Tuner comes with Bayesian Optimization, Hyperband, and Random Search algorithms built-in, and is also designed to be easy for researchers to extend in order to experiment with new search algorithms.
Once your data is in the form of string/int/float NumPy arrays, or a Dataset object (or Python generator) that yields batches of string/int/float tensors, it is time to preprocess the data. This can mean: firstly, tokenization of string data, followed by token indexing; secondly, feature normalization; thirdly, rescaling the data to small values. In general, input values to a neural network should be close to zero: typically we expect either data with zero mean and unit variance, or data in the [0, 1] range.
Some examples: Firstly, neural networks don't process raw data such as text files, encoded JPEG image files, or CSV files; they process vectorized and standardized representations. Secondly, text files need to be read into string tensors, then split into words; finally, the words need to be indexed and turned into integer tensors. Thirdly, images need to be read and decoded into integer tensors, then converted to floating point and normalized to small values (usually between 0 and 1). Lastly, CSV data needs to be parsed, with numerical features converted to floating-point tensors and categorical features indexed and converted to integer tensors; then each feature typically needs to be normalized to zero mean and unit variance.