Fine-tune VGG16

In this workflow we are fine-tuning a VGG16 network, similar to the "Fine-tune VGG16 (Python)" workflow. However, we won't make use of the DL Python Learner/Executor nodes; instead, we use the Keras Network Learner and DL Network Executor nodes to train and execute the networks in this workflow.
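For readers who want to see the same idea in code, here is a minimal Keras sketch of fine-tuning VGG16: the convolutional base is frozen and a new classifier head is trained. The number of classes, layer sizes, and the commented-out training call are assumptions for illustration, not details taken from the workflow.

```python
# Minimal sketch of fine-tuning VGG16 in Keras; NUM_CLASSES and the choice
# of frozen layers are assumptions, not values from the workflow.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

NUM_CLASSES = 10  # hypothetical number of target classes

# Load VGG16 without its original classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # freeze the convolutional base

# Add a new classifier head for the target task.
x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)
outputs = Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # training data not shown here
```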

Deployment

The workflow generates text in fairy tale style. It reads the previously trained TensorFlow network, predicts a sequence of index-encoded characters within a loop, and translates the sequence of indexes into characters.
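Outside of KNIME, such a prediction loop can be sketched in Python roughly as follows; the one-hot input encoding, the window length, and the sampling strategy are assumptions rather than details taken from the workflow.

```python
# Hypothetical sketch of character-by-character generation with a trained
# character-level model; the dictionaries and window length are assumptions.
import numpy as np

def generate_text(model, seed, char_to_index, index_to_char, length=500, seq_length=100):
    """Predict one index-encoded character at a time and decode the result."""
    text = seed
    num_chars = len(char_to_index)
    for _ in range(length):
        # One-hot encode the most recent characters as a fixed-length window.
        window = text[-seq_length:]
        x = np.zeros((1, seq_length, num_chars))
        for t, ch in enumerate(window):
            x[0, t, char_to_index[ch]] = 1.0
        probs = model.predict(x, verbose=0)[0].astype("float64")
        probs /= probs.sum()  # renormalize before sampling
        next_index = int(np.random.choice(num_chars, p=probs))
        text += index_to_char[next_index]
    return text
```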

Training

The workflow builds, trains, and saves an RNN with an LSTM layer to generate new fictive fairy tales. The brown nodes define the network structure. The "Pre-Processing" metanode reads the fairy tales, index-encodes them, and creates semi-overlapping sequences. The Keras Network Learner node trains the network using the index-encoded fairy tales. Finally, the trained network is converted into a TensorFlow model and saved to a file.
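A rough Keras equivalent of the network defined by the brown nodes might look like the sketch below; the LSTM size, sequence length, and vocabulary size are hypothetical, and the commented-out training call stands in for the Keras Network Learner node.

```python
# Minimal sketch of a character-level LSTM; layer sizes, sequence length,
# and vocabulary size are assumptions, not values from the workflow.
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Sequential

VOCAB_SIZE = 60    # hypothetical number of distinct characters
SEQ_LENGTH = 100   # hypothetical length of the semi-overlapping sequences

model = Sequential([
    Input(shape=(SEQ_LENGTH, VOCAB_SIZE)),    # one-hot encoded character sequences
    LSTM(256),                                # the single LSTM layer
    Dense(VOCAB_SIZE, activation="softmax"),  # distribution over the next character
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
# model.fit(x_sequences, y_next_char, epochs=20, batch_size=128)
```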

Deployment

The workflow generates 200 new, fictive mountain names. It reads the previously trained TensorFlow network and predicts 200 sequences of index-encoded characters within a loop. The last node, named Extract Mountain Names, translates the sequences of indexes into characters and visualizes the new fictive mountain names.
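The decoding step could be sketched in Python along these lines; the index dictionary and the end-of-name marker are illustrative assumptions, not details taken from the Extract Mountain Names node.

```python
# Hypothetical helper that turns predicted index sequences back into strings;
# the end-of-name marker is an assumption.
def decode_names(index_sequences, index_to_char, end_char="\n"):
    """Translate index-encoded predictions into readable mountain names."""
    names = []
    for sequence in index_sequences:
        text = "".join(index_to_char[i] for i in sequence)
        names.append(text.split(end_char)[0])  # cut at the end-of-name marker
    return names
```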

Training

The workflow builds, trains, and saves an RNN with an LSTM layer to generate new fictive mountain names. The brown nodes define the network structure. The "Pre-Processing" metanode reads the original mountain names and index-encodes them. The Keras Network Learner node trains the network using the index-encoded original mountain names. Finally, the trained network is prepared for deployment, transformed into a TensorFlow model, and saved to a file.
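The index-encoding done in the "Pre-Processing" metanode can be illustrated with a small Python sketch; the actual workflow operates on KNIME tables, so this is only a rough equivalent with made-up example names.

```python
# Illustrative index-encoding: map every character to an integer index
# and encode each name as a list of indexes.
def index_encode(names):
    """Build a character vocabulary and index-encode each name."""
    vocabulary = sorted({ch for name in names for ch in name})
    char_to_index = {ch: i for i, ch in enumerate(vocabulary)}
    encoded = [[char_to_index[ch] for ch in name] for name in names]
    return encoded, char_to_index

# Example (hypothetical input names):
encoded, char_to_index = index_encode(["Mont Blanc", "Matterhorn", "Eiger"])
```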

Neural Machine Translation

This workflow uses a character-level encoder-decoder network of LSTMs.
The encoder network reads the input sentence character by character and summarizes the sentence in its state.
This state is then used as the initial state of the decoder network to produce the translated sentence one character at a time.
During prediction, the decoder also receives its previous output as input at the next time step.
For training, we use a technique called "teacher forcing", i.e., we feed the actual previous character instead of the previous prediction, which greatly benefits training.
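The same encoder-decoder setup with teacher forcing can be sketched in Keras as follows; the vocabulary sizes and the latent dimension are assumptions, and the commented-out fit call indicates how the shifted targets would be supplied.

```python
# Minimal sketch of a character-level encoder-decoder trained with teacher
# forcing; SRC_VOCAB, TGT_VOCAB, and LATENT are hypothetical sizes.
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Model

SRC_VOCAB, TGT_VOCAB, LATENT = 70, 90, 256

# Encoder: read the source sentence and keep only its final state.
encoder_inputs = Input(shape=(None, SRC_VOCAB))
_, state_h, state_c = LSTM(LATENT, return_state=True)(encoder_inputs)

# Decoder: starts from the encoder state; during training it receives the
# actual previous target character as input (teacher forcing).
decoder_inputs = Input(shape=(None, TGT_VOCAB))
decoder_lstm = LSTM(LATENT, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = Dense(TGT_VOCAB, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
# Targets are the decoder inputs shifted by one character:
# model.fit([encoder_input_data, decoder_input_data], decoder_target_data, ...)
```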

Semantic Segmentation

This workflow shows how the new KNIME Keras integration can be used to train and deploy a specialized deep neural network for semantic segmentation.
This means that the network decides, for each pixel in the input image, which class of object it belongs to.
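As a rough illustration of per-pixel classification, a tiny fully convolutional network in Keras could look like the sketch below; the architecture and the number of classes are assumptions and not the specialized network used in the workflow.

```python
# Illustrative fully convolutional network that outputs a class distribution
# for every pixel; NUM_CLASSES and the layer sizes are assumptions.
from tensorflow.keras.layers import Conv2D, Input
from tensorflow.keras.models import Model

NUM_CLASSES = 5  # hypothetical number of object classes

inputs = Input(shape=(None, None, 3))
x = Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = Conv2D(32, 3, padding="same", activation="relu")(x)
# A 1x1 convolution with softmax yields a per-pixel class probability map.
outputs = Conv2D(NUM_CLASSES, 1, activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```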
