Data Pipeline Project

The Data Pipeline Project is a multiyear collaborative initiative undertaken by teams across Kent State University and led by the Division of Information Technology's Systems Development & Innovations and Data Management & Analytics teams. It is intended to transform how systems connect to one another, how data moves between them, and how data is treated across the enterprise.

Project Vision

Achieve operational excellence through more efficient data flows by implementing a solution that connects data and users in a uniform, standardized, and visible manner and that serves as the foundation for future decisions and integrations.

The Data Pipeline sits between data producers and data consumers, providing a uniform path for data access that guarantees authorization, logging, and security, along with caching and transformation functionality.
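As an illustrative sketch only (not the project's actual implementation or technology choice), a pipeline layer of this kind can wrap every data request in a single uniform path that enforces authorization, records a log entry, and serves cached results before reaching the producer; all names below are hypothetical:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")


class DataPipeline:
    """Hypothetical uniform access layer between consumers and producers."""

    def __init__(self, producers, authorized_users, cache_ttl=60.0):
        self.producers = producers          # source name -> zero-arg fetch function
        self.authorized = authorized_users  # user -> set of readable source names
        self.cache_ttl = cache_ttl          # seconds a cached result stays fresh
        self._cache = {}                    # source -> (timestamp, data)

    def fetch(self, user, source):
        # Authorization: every request is checked in one place.
        if source not in self.authorized.get(user, set()):
            log.warning("denied %s -> %s", user, source)
            raise PermissionError(f"{user} may not read {source}")

        # Caching: serve a recent result without hitting the producer.
        cached = self._cache.get(source)
        if cached and time.monotonic() - cached[0] < self.cache_ttl:
            log.info("cache hit for %s", source)
            return cached[1]

        # Logging + fetch: one uniform path regardless of producer technology.
        log.info("fetch %s for %s", source, user)
        data = self.producers[source]()
        self._cache[source] = (time.monotonic(), data)
        return data


pipeline = DataPipeline(
    producers={"enrollment": lambda: {"students": 25000}},
    authorized_users={"analyst": {"enrollment"}},
)
pipeline.fetch("analyst", "enrollment")  # fetched from producer
pipeline.fetch("analyst", "enrollment")  # served from cache
```

Because consumers only ever call the pipeline, access rules and audit logging live in one place rather than being re-implemented in every point-to-point integration.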

Key Objectives

  • Uniform data flow
    Standard data formats, logic in one place, one source of data regardless of target and technology

  • Uniform data governance
    Access management, data usage visibility, standard definitions, cleansing and deduplication

  • Event-driven
    Push changes instead of polling, leaner transactions, cache policies, real-time updates, lighter system loads

  • Self-documenting
    All data points auditable and reportable

  • Reporting & Analytics
    Provide ‘data as a service’, combine multiple sources, offer real-time analytics and a holistic view

  • Flexible hosting
    Same code on premises or in the cloud, fault tolerant, scalable
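The event-driven objective above, pushing changes to interested systems instead of having each of them poll, can be sketched with a minimal publish/subscribe pattern. This is an illustrative example under assumed names, not the project's chosen messaging technology:

```python
from collections import defaultdict


class EventBus:
    """Minimal pub/sub sketch: producers push changes, consumers never poll."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # A consumer registers interest once instead of polling repeatedly.
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # The producer pushes the change to every subscriber immediately.
        for callback in self._subscribers[topic]:
            callback(event)


bus = EventBus()
received = []
bus.subscribe("student.updated", received.append)

# One push reaches all interested consumers in real time, with no
# polling loops and no repeated full-table reads against the source.
bus.publish("student.updated", {"id": 42, "major": "CS"})
print(received)  # [{'id': 42, 'major': 'CS'}]
```

The same idea is what makes the "lighter system loads" point possible: source systems do work only when data actually changes, rather than answering the same polling query on a timer.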