The Basics of Data Pipeline Automation

Read time: 4 mins
Last updated: Feb 4th, 2021

If your business is like many others today, you're using multiple data pipelines, quite possibly of different types. You may also have heard the term ETL (extract, transform, load) pipeline, but a data pipeline is not exactly the same thing as an ETL pipeline; an ETL pipeline is a specific subset of data pipelines with its own characteristics. Increasingly, orchestration platforms are coming into play to integrate ETL and other data pipeline automation for smooth, highly manageable end-to-end operation.

What is a Data Pipeline?

Data pipeline is a broad term that simply refers to an IT system for moving data from one system to another. Businesses leverage data pipelines for many different purposes, including the following:

  • Handling large volumes of data from different sources
  • Automating data warehousing
  • Performing data analysis
  • Taking advantage of cloud storage space
  • Maintaining separate or “siloed” data sources

How Do Data Pipelines Work?

In some pipelines, the data is processed in real-time, or “streamed,” whereas in other pipelines, it is not. The data from the pipeline might be loaded to a database or data warehouse, or to any of a number of other targets, such as a data lake, an Amazon Web Services (AWS) bucket, a visualization app, or Salesforce, for example.

In some pipelines, the data is “transformed,” but in others, it is not. When data is transformed, it is converted into a format that can be easily used by various applications.
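To make those stages concrete, here is a minimal sketch in Python of a simple, non-streaming pipeline. The CSV source file, the field names, and the SQLite target are assumptions for illustration only; a production pipeline would read from and write to your actual systems.

    import csv
    import sqlite3

    def extract(path):
        # Pull raw rows from a source system (here, a CSV export).
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        # Convert each row into a consistent format the target expects.
        return [(r["id"], r["name"].strip().lower(), float(r["amount"])) for r in rows]

    def load(rows, db_path):
        # Write the cleaned rows to the target (here, a SQLite table).
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS payments (id TEXT, name TEXT, amount REAL)")
        con.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
        con.commit()
        con.close()

    load(transform(extract("export.csv")), "warehouse.db")

Each stage is independent, so the same extract step could feed a different target, and the transform step can be dropped entirely for pipelines that move data as-is.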

Types of Data Pipelines

While data pipeline is a generic term, there are three main types of pipelines, distinguished by purpose, and some pipelines can belong to more than one category. Pipelines can use a variety of data pipeline and workflow management tools to carry out their work.

  • Real-time. Real-time data pipelines are optimized to process data as it arrives. Real-time processing is needed when you are processing data from a streaming source, such as financial market feeds or telemetry from connected Internet of Things (IoT) devices.
  • Batch. Batch processing data pipelines are generally used in situations where you want to move large amounts of data at regular intervals, but the movement doesn’t need to happen in real time. For instance, you might move marketing data into a data warehouse on a weekly basis for later analysis. Batch data can be stored until it’s ready for processing (a short sketch contrasting the two styles follows this list).
  • Cloud-native. Optimized to work with cloud-based data, these pipelines are capable of creating complex data processing workloads. One example is AWS Data Pipeline, an Amazon web service for transforming and automating cloud data. You might want to use a cloud-native pipeline when migrating information to the cloud for highly sophisticated analysis.
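As referenced in the batch item above, here is a minimal Python sketch contrasting the two triggering styles. The staging directory and record source are placeholders, and the print calls stand in for real pipeline steps.

    import glob

    def run_batch(staging_dir):
        # Batch: process whatever has accumulated since the last run.
        # In practice a scheduler or orchestration platform invokes this
        # at a fixed interval, such as weekly.
        for path in glob.glob(f"{staging_dir}/*.csv"):
            print(f"processing {path}")

    def run_streaming(source):
        # Real-time: handle each record the moment it arrives.
        for record in source:  # e.g., a message-queue consumer
            print(f"processing {record}")

The key difference is the trigger: the batch function runs on a schedule and works through a backlog, while the streaming function blocks on its source and reacts to each arriving record.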

Data Pipeline vs. ETL

ETL data pipelines are useful for centralizing disparate data sources, giving the company a consolidated version of information from different systems, such as applications, databases, business systems, and sensors. These are not real-time pipelines, though; ETL pipelines typically run in batches. For instance, you might configure batches to run at 1 a.m. each night, when system traffic is low.

Before transformation, information is extracted from several heterogeneous sources. After the data is transformed into a consistent format, it is loaded into a target ETL data warehouse or database.
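As a sketch of that flow, the following Python job extracts from two heterogeneous sources (a CSV export and a JSON feed, both invented for illustration), transforms them into one consistent schema, and loads the result. The nightly 1 a.m. schedule mentioned above would come from an external scheduler such as cron.

    # Example cron entry for a nightly 1 a.m. run:
    # 0 1 * * * /usr/bin/python3 etl_job.py
    import csv
    import json

    def extract_sources():
        # Source A: a CSV export from one business system.
        with open("branch_a.csv", newline="") as f:
            yield from ({"id": r["ID"], "amount": r["Amt"]} for r in csv.DictReader(f))
        # Source B: a JSON feed from another application.
        with open("branch_b.json") as f:
            yield from ({"id": r["accountId"], "amount": str(r["total"])} for r in json.load(f))

    def transform(records):
        # Normalize both sources into one consistent schema.
        for r in records:
            yield {"account_id": r["id"], "amount_usd": round(float(r["amount"]), 2)}

    def load(records):
        # Append to a staging file; a real job would write to the warehouse.
        with open("warehouse_staging.jsonl", "a") as out:
            for r in records:
                out.write(json.dumps(r) + "\n")

    load(transform(extract_sources()))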

Orchestration Platforms for Data Pipeline Automation

ETL is often handled by a legacy system running IBM i or z/OS. A common business challenge is getting that data to third-party apps for processing, analysis, and reporting and then moving the output to the relevant end users.

An orchestration platform like OpCon can resolve this issue by serving as the bridge between systems, providing an integration point. OpCon communicates with large numbers of third-party legacy, cloud, and hybrid apps via dedicated connectors, agents, or application programming interfaces (APIs).
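As an illustration of the API route, a short script can trigger a job on an orchestration platform over REST. The endpoint, payload, and token below are hypothetical stand-ins, not OpCon's actual API; consult the product documentation for the real interface and authentication.

    import json
    import urllib.request

    # Hypothetical endpoint and payload for kicking off a downstream job.
    req = urllib.request.Request(
        "https://automation.example.com/api/jobs",    # placeholder URL
        data=json.dumps({"job": "nightly_etl"}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <token>"},  # placeholder credential
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())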

OpCon was designed with flexibility and backward compatibility in mind, so it can talk to a 25-year-old on-premises IBM midrange server or a Kubernetes container cluster hosted in the cloud on AWS. OpCon can move data wherever you point it. ETL integrations alone include Informatica, Oracle, MySQL, Teradata, and MongoDB databases.

An orchestration platform also acts as a single point of control for all automated pipelines. Many companies have data pipeline apps, but many of the processes that move data through the pipeline are either manual or poorly automated. As a single point of control, OpCon offers a well-orchestrated alternative to juggling a collection of different interfaces for various automated processes.

Reporting is another key part of the automation process. Typically, third-party apps provide only limited scheduling capabilities for reports. In contrast, OpCon provides highly advanced options for both the scheduling and movement of reports, keeping managers solidly in the loop about the progress of data pipeline automation processes.

Interested in finding out more about how your organization can use automation to orchestrate data pipelines? Fill out the form below and we’ll be happy to have a conversation with you about the challenges your business wants to solve.

In this article

What is data pipeline automation and what tools can help you implement it? We answer these questions and suggest OpCon as a data pipeline automation platform.
