This workflow graphically explores attendance at baseball games, together with weather conditions and calendar information, using a bar chart and a sunburst chart. Attendance data are stored in a Google BigQuery database. Weather data are stored in a SQLite database. Will they blend?
This workflow implements a DWH operation: configuring and launching a Snowflake in-cloud data warehouse instance. Sales orders are read from an Excel file; return orders, supplied by our e-commerce platform's API in JSON format, are filtered out; and the clean, correct orders are uploaded to a Snowflake table, created dynamically by the Database Writer node.
This workflow jams together data from not one, not two, but six databases: MySQL, MongoDB, MS SQL Server, MariaDB, Oracle, and PostgreSQL. The use case is Next Best Offer, modelling the likelihood that a customer will buy a second product. This workflow is a variation of the workflow we build together in courses on KNIME Analytics Platform.
Today's challenge is to blend data between a Teradata Aster database and a KNIME table in KNIME Analytics Platform. Why these two? Teradata Aster is a database system in use at many companies around the world, and KNIME tables are an easy way to store and access models built in other KNIME workflows. The data come from a collection of open-source heart disease data sets in .txt format, available at http://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/ .
This workflow blends data between two different relational DBMS in KNIME Analytics Platform. It focuses on two common relational database systems: MS Access and H2. Why these two? Both are relatively easy to use, and both are found mainly in individual departments or small-to-medium businesses.
The data set is a standard baseball encyclopedia, available in text form, as .csv files, and in relational database format at http://www.seanlahman.com/baseball-archive/statistics/.
This workflow demonstrates the different database binner nodes, which allow you to create new binning columns for numerical columns. An example is the conversion of a numerical column containing age in years into a categorical column with values such as children and adults. The demonstrated nodes include the Database Auto-Binner node, which automatically creates the bin boundaries.
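The SQL that such a binner node pushes into the database boils down to a CASE expression. A minimal sketch, using SQLite and a hypothetical `people` table (the table and the bin boundary are assumptions for illustration, not taken from the workflow):

```python
import sqlite3

# In-memory SQLite database with a hypothetical "people" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("Ann", 9), ("Bob", 34), ("Cid", 17), ("Dee", 52)])

# A CASE expression bins the numeric age column into categories,
# much like the new column a database binner node appends.
rows = conn.execute("""
    SELECT name, age,
           CASE WHEN age < 18 THEN 'children' ELSE 'adults' END AS age_bin
    FROM people
""").fetchall()
print(rows)
```

Because the CASE expression runs inside the database, the binning happens without pulling the raw rows into KNIME first.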
This KNIME workflow shows a number of database nodes that work directly inside a database. Note that the flow uses the SQLite library included within the workflow, which needs to be registered on the KNIME preferences page.
This workflow creates an SQL statement that generates a pivot table. A pivot table allows you to quickly summarize your data based on a pivot, aggregation and value columns.
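A pivot of this kind can be expressed in plain SQL as conditional aggregation. A sketch using SQLite and a hypothetical `sales` table, where `year` is the pivot column, `SUM` the aggregation, and `amount` the value column (all names are assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, year INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("East", 2019, 10.0), ("East", 2020, 20.0),
                  ("West", 2019, 5.0), ("West", 2020, 15.0)])

# Pivot: one row per region, one column per year value,
# with SUM(amount) as the aggregate in each cell.
rows = conn.execute("""
    SELECT region,
           SUM(CASE WHEN year = 2019 THEN amount ELSE 0 END) AS y2019,
           SUM(CASE WHEN year = 2020 THEN amount ELSE 0 END) AS y2020
    FROM sales
    GROUP BY region
    ORDER BY region
""").fetchall()
print(rows)  # [('East', 10.0, 20.0), ('West', 5.0, 15.0)]
```

One CASE branch per distinct pivot value is exactly what a generated pivot statement has to enumerate.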
This workflow demonstrates how to read/write PNG images from/to a database. The Binary Object nodes are available via the File Handling extension.
Requirements: KNIME File Handling Nodes (Go to File->Install KNIME Extensions: KNIME & Extensions)
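At the database level, storing an image means writing its bytes into a BLOB column and reading them back unchanged. A minimal sketch with SQLite, using the 8-byte PNG signature as a stand-in for real image data (the `images` table is an assumption for illustration):

```python
import sqlite3

# Stand-in image data: just the PNG file signature, in place of
# bytes read from an actual .png file on disk.
png_bytes = b"\x89PNG\r\n\x1a\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (name TEXT PRIMARY KEY, data BLOB)")
conn.execute("INSERT INTO images VALUES (?, ?)", ("logo.png", png_bytes))

# Reading the binary object back returns the identical bytes.
(restored,) = conn.execute(
    "SELECT data FROM images WHERE name = ?", ("logo.png",)).fetchone()
assert restored == png_bytes
```

The round trip is lossless because BLOBs are stored byte-for-byte; no text encoding is applied.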
This workflow creates an SQL statement that allows you to extract a sample of data from a database. The node also supports stratified sampling, which is the preferred way to sample from populations with varying subpopulation sizes.
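To make the difference concrete: a plain random sample draws rows regardless of group membership, while a stratified sample draws the same fraction from each subpopulation. A sketch in SQLite with a hypothetical `customers` table split into a large segment A and a small segment B (names and the 10% fraction are assumptions for illustration; the window functions need SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, segment TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, "A" if i < 80 else "B") for i in range(100)])

# Simple random sample of 10 rows: segment B may be over- or
# under-represented purely by chance.
sample = conn.execute(
    "SELECT id FROM customers ORDER BY RANDOM() LIMIT 10").fetchall()

# Stratified sample: rank rows randomly within each segment and keep
# the top 10% per segment, so each subpopulation keeps its proportion.
stratified = conn.execute("""
    SELECT id, segment FROM (
        SELECT id, segment,
               ROW_NUMBER() OVER (PARTITION BY segment ORDER BY RANDOM()) AS rn,
               COUNT(*) OVER (PARTITION BY segment) AS n
        FROM customers)
    WHERE rn <= n / 10
""").fetchall()
print(len(sample), len(stratified))  # 10 10
```

With 80 A-rows and 20 B-rows, the stratified query always returns 8 from A and 2 from B, which is why stratification is preferred when subpopulation sizes vary.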