environment can be installed from the R console using the BiocManager::install
command. The associated data package systemPipeRdata
can be installed the same way. The latter is a helper package for generating,
with a single command, systemPipeR workflow environments that contain all
parameter files and sample data required to quickly test and run workflows.
if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
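With BiocManager available, both packages can then be installed from Bioconductor with the standard installation call (an internet connection is required):

```r
# Install systemPipeR and its helper data package from Bioconductor
BiocManager::install("systemPipeR")
BiocManager::install("systemPipeRdata")
```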
Please note that if you want to use a third-party command-line tool, the
tool and its dependencies need to be installed and exported in your PATH.
Loading package and documentation
library("systemPipeR") # Loads the package
library(help = "systemPipeR") # Lists package info
vignette("systemPipeR") # Opens vignette
Please add the “systemPipeR” tag to your question, and the package authors will
automatically receive an alert.
We appreciate receiving reports of bugs in the functions or documentation and
suggestions for improvement. For that, please consider opening an issue at
systemPipeR expects a project directory structure that consists of a directory
where users may store all the raw data, a results directory reserved for all
output files or new output folders, and a parameters directory.
This structure supports reproducibility and collaboration across the data science
team, since relative paths are used internally. Users can transfer this project
to a different location and still be able to run the entire workflow. It also
improves efficiency and data management, since the raw data are kept in a
separate folder and duplication is avoided.
systemPipeRdata, the associated helper package, provides pre-configured workflows, reporting
templates, and sample data, loaded as demonstrated below. With a single command,
the package creates the workflow environment containing the structure
described here (see Figure 1).
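For example, a workflow environment can be generated with `genWorkenvir()`; the `"rnaseq"` template shown here is one of several available templates:

```r
library(systemPipeRdata)
# Generate a pre-configured RNA-Seq workflow environment in the current
# working directory, creating the directory structure described above
genWorkenvir(workflow = "rnaseq")
setwd("rnaseq")
```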
Directory names are indicated in green.
Users can change this structure as needed, but need to adjust the code in their
workflows accordingly.
- This is the root directory of the R session running the workflow.
- Run script (*.Rmd) and sample annotation (targets.txt) files are located here.
- Note, this directory can have any name (e.g. `myproject`). Changing its name does not require any modifications in the run script(s).
- param/cwl/: This subdirectory stores all the parameter and configuration files. To organize workflows, each can have its own subdirectory, where all its *.cwl and *input.yml files are stored together.
- Raw data (e.g. FASTQ files)
- FASTA file of reference (e.g. reference genome)
- Analysis results are usually written to this directory, including: alignment, variant and peak files (BAM, VCF, BED); tabular result files; and image/plot files
- Note, the user has the option to organize results files for a given sample and analysis step in a separate subdirectory.
The following parameter files are included in each workflow template:
- targets.txt: initial one provided by user; downstream targets_*.txt files are generated automatically
- *.param/cwl: defines parameters for input/output file operations, e.g.:
- *_run.sh: optional bash scripts

Configuration files for computer cluster environments (skip on single machines):

- .batchtools.conf.R: defines the type of scheduler for batchtools, pointing to the template file of the cluster; it is located in the user's home directory
- batchtools.*.tmpl: specifies the parameters of the scheduler used by the system, e.g. Torque, SGE, Slurm, etc.
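As an illustration, a minimal `.batchtools.conf.R` for a Slurm system might contain a single line like the following (the template file name is an assumption; adjust it to the scheduler in use):

```r
# .batchtools.conf.R (hypothetical example for a Slurm cluster)
# Points batchtools to the Slurm template file used by the workflow
cluster.functions <- batchtools::makeClusterFunctionsSlurm(
    template = "batchtools.slurm.tmpl"
)
```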
Structure of initial targets file
The targets file defines all input files (e.g. FASTQ, BAM, BCF) and sample
comparisons of an analysis workflow. The following shows the format of a sample
targets file included in the package. It also can be viewed and downloaded
from systemPipeR’s GitHub repository here.
In a targets file with a single type of input files, here FASTQ files of
single-end (SE) reads, the first column describes the path and the second column
holds a unique id name for each sample. The third column, called Factor,
represents the biological replicates. All subsequent columns contain additional
information, and any number of extra columns can be added as needed.
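A minimal SE targets file following this layout could look as follows (the file names and sample ids are hypothetical; columns are tab-separated):

```
FileName                   SampleName  Factor
./data/sampleA1.fastq.gz   A1a         A1
./data/sampleA2.fastq.gz   A1b         A1
./data/sampleB1.fastq.gz   B1a         B1
```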
Note that the use of targets files is optional when working with
systemPipeR's new workflow management interface; they can be replaced by a standard YAML
input file used by CWL. However, targets files are extremely useful and
user-friendly for organizing experimental variables, so we encourage users to keep using them.
Structure of targets file for single-end (SE) samples
Structure of targets file for “Hello World” example
In this example, the targets file contains only two columns: the first holds
the different phrases passed to the echo command-line tool, and the second
the sample id. The id column is required, and each sample id must be unique.
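Such a two-column targets file could look like this (the column names and values are illustrative; columns are tab-separated):

```
Message         SampleName
Hello World!    M1
Hello USA!      M2
Hello Bioc!     M3
```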
##  "# Project ID: Arabidopsis - Pseudomonas alternative splicing study (SRA: SRP010938; PMID: 24098335)"
##  "# The following line(s) allow to specify the contrasts needed for comparative analyses, such as DEG identification. All possible comparisons can be specified with 'CMPset: ALL'."
##  "# <CMP> CMPset1: M1-A1, M1-V1, A1-V1, M6-A6, M6-V6, A6-V6, M12-A12, M12-V12, A12-V12"
##  "# <CMP> CMPset2: ALL"
The function readComp imports the comparison information and stores it in a
list. Alternatively, readComp can obtain the comparison information from
the corresponding SYSargsList step (see below). Note, these header lines are
optional. They are mainly useful for controlling comparative analyses according
to certain biological expectations, such as identifying differentially expressed
genes in RNA-Seq experiments based on simple pair-wise comparisons.
targetspath <- system.file("extdata", "targets.txt", package = "systemPipeR")
readComp(file = targetspath, format = "vector", delim = "-")
After the step that requires the initial targets file information, the downstream
targets files are created automatically (see Figure 2).
For each step that uses the outfiles of a previous step as input, the new targets
input is managed internally by the workflow instances, establishing connectivity
among the steps in the workflow.
systemPipeR provides features to automatically and systematically build this
connection, ensuring that all the samples are managed efficiently.
Figure 2: *`systemPipeR`* automatically creates the downstream `targets` files based on the outfiles of the previous steps. A) Usually, users provide the initial `targets` file, and this step will generate some outfiles, as demonstrated in B. Those files are then used to build the new `targets` files serving as inputs in the next step. *`systemPipeR`* (C) manages this connectivity among the steps automatically for the users.
Structure of the new parameters files
The parameters and configuration required for running command-line software are
provided by the widely used community standard Common Workflow Language (CWL)
(Amstutz et al. 2016), which describes the parameters of analysis workflows in a
generic and reproducible manner. For R-based workflow steps, param files are not required.
For a complete overview of the CWL syntax, please see the section below.
There is also a dedicated section explaining how systemPipeR establishes the
connection between the CWL parameter files and the targets files. Please see here.
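As a sketch of how these pieces fit together, a command-line step can be loaded from its CWL definition and rendered against a targets file with `loadWorkflow()` and `renderWF()`; the file and directory names below are placeholders, to be replaced by the files of a given workflow template:

```r
library(systemPipeR)
# Load a command-line step from its CWL file and input YAML
# (paths and file names are placeholders for a workflow template's files)
targetspath <- system.file("extdata", "targets.txt", package = "systemPipeR")
WF <- loadWorkflow(targets = targetspath,
                   wf_file = "example/workflow_example.cwl",
                   input_file = "example/example.yml",
                   dir_path = "param/cwl")
# Map the variables in the input YAML to the columns of the targets file
WF <- renderWF(WF, inputvars = c(FileName = "_FASTQ_PATH1_",
                                 SampleName = "_SampleName_"))
```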