What is Data Fabric?
“It is a set of hardware infrastructure, software, tools and resources to implement, administer, manage and execute data operations in the company, including acquisition, transformation, storage, distribution, integration, replication, availability, security, protection, disaster recovery, presentation, analysis, preservation, retention, backup, recovery, archiving, recall, deletion, monitoring and capacity planning, across all data storage platforms and enabling application use to meet the company's data needs.” (Alan McSweeney)
Data Fabric is a new way to operate on data, using all the resources and technology innovations available to deliver business value, including multimodel databases, analytics, AI, ESB/SOA, microservices and API management.
Data Fabric principles
Alan McSweeney lists the following principles for a Data Fabric:
* Administration, management and control - maintain control and the ability to manage and administer data, regardless of where it is located
* Stability, reliability and consistency - common tools and utilities used to deliver a stable and reliable Data Fabric across all layers
* Security - security standards across the Data Fabric, governance automation, compliance and risk management
* Open, flexible and free choice - the ability to choose and change data storage, access and location
* Automation - automated management and maintenance activities, DevOps and DevSecOps
* Performance, recovery, access and use - applications and users can access data when needed, as needed, and in a usable format
* Integration - all components interoperate at all layers
Data Fabric Architecture
McSweeney designed a conceptual diagram detailing how a Data Fabric fits into an organization (see the SlideShare deck linked at the end of this article).
It shows that several technologies are needed, with fluid data input, processing and output operating together to "fabric" the data and deliver business value to the data consumers. These elements can be summarized as follows:
Data operations start at the Data Intake Gateway, which uses ESB and API Gateway technology to capture, orchestrate, transform, enrich and integrate incoming data into corporate data assets.
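To make this concrete, here is a minimal ObjectScript sketch of an intake business service for an IRIS interoperability production. The class, adapter choice and target names are illustrative assumptions; a real production would add routing rules and data transformations:

```objectscript
/// A minimal data-intake sketch (hypothetical names).
/// The service receives an inbound payload and forwards it to a
/// routing process that can transform and enrich it on the way
/// to the corporate data repository.
Class Demo.DataFabric.IntakeService Extends Ens.BusinessService
{

/// Listen for inbound HTTP requests (one of several available adapters)
Parameter ADAPTER = "EnsLib.HTTP.InboundAdapter";

Method OnProcessInput(pInput As %Stream.Object, Output pOutput As %RegisteredObject) As %Status
{
    // Wrap the raw payload in a message object and send it
    // asynchronously to the (hypothetical) routing process
    Set tRequest = ##class(Ens.StreamContainer).%New(pInput)
    Quit ..SendRequestAsync("Demo.DataFabric.Router", tRequest)
}

}
```

Sending asynchronously decouples intake from processing, so the gateway keeps accepting data even when downstream transformations are slow.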
The results of these data operations are consumed through analytics and AI, allowing data consumers to obtain insights.
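As a sketch of this consumption side, IntegratedML extends IRIS SQL with CREATE MODEL, TRAIN MODEL and PREDICT commands, which can be executed from ObjectScript. The table, column and model names below are assumptions for illustration:

```objectscript
/// A minimal IntegratedML sketch (hypothetical table and model names)
Class Demo.DataFabric.Insights Extends %RegisteredObject
{

/// Run one SQL statement and return a %Status
ClassMethod RunSQL(pSQL As %String) As %Status
{
    Set tResult = ##class(%SQL.Statement).%ExecDirect(, pSQL)
    If tResult.%SQLCODE < 0 Quit $$$ERROR($$$GeneralError, tResult.%Message)
    Quit $$$OK
}

ClassMethod PredictChurn() As %Status
{
    // IntegratedML SQL: define the model, train it, then query predictions
    Set tSC = ..RunSQL("CREATE MODEL ChurnModel PREDICTING (Churned) FROM Demo_Data.Customer")
    If $$$ISOK(tSC) Set tSC = ..RunSQL("TRAIN MODEL ChurnModel")
    If $$$ISOK(tSC) Set tSC = ..RunSQL("SELECT ID, PREDICT(ChurnModel) AS WillChurn FROM Demo_Data.Customer")
    Quit tSC
}

}
```

Because training and prediction are plain SQL, data consumers can get predictions where the data lives instead of maintaining a separate ML stack.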
Multimodel repositories are also key players, because of the volume and variety of the data and the requirement for "golden data", the single source of truth.
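As a minimal sketch of this multimodel capability, the hypothetical persistent class below stores a record once and reaches it as an object, as an SQL row and as a native global:

```objectscript
/// A minimal multimodel sketch (hypothetical class and data)
Class Demo.DataFabric.Customer Extends %Persistent
{

Property Name As %String;

Property Email As %String;

ClassMethod Demo()
{
    // Object model: create and save an instance (status check omitted for brevity)
    Set customer = ..%New()
    Set customer.Name = "Alice", customer.Email = "alice@example.com"
    Do customer.%Save()

    // Relational model: the same record as an SQL row
    Set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT Name, Email FROM Demo_DataFabric.Customer")
    While rs.%Next() { Write rs.Name, " ", rs.Email, ! }

    // Key-value model: the same record in the underlying global
    // (the global name comes from this class's default storage definition)
    Write $ListToString($Get(^Demo.DataFabric.CustomerD(customer.%Id())), " "), !
}

}
```

Because all three models share one storage engine, there is no synchronization step between them, which is what makes a consistent golden record practical.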
InterSystems IRIS and the Data Fabric
InterSystems IRIS is a data platform that enables a Data Fabric architecture within organizations, as this mapping shows:
| DATA FABRIC COMPONENT | INTERSYSTEMS IRIS COMPONENT |
| --- | --- |
| Multi-Purpose Repository | IRIS Database |
| Data Intake / Gateway via Integration Bus / Service Bus and API Gateway | IRIS Interoperability |
| Data Extraction / Ingestion, Transformation and Loading | IRIS Interoperability |
| Analysis and Reporting Utilities | IRIS Analytics |
| AI Utilities | IRIS Analytics |
Conclusion
InterSystems IRIS is not simply a database or an interoperability platform; it is a central player in building your Data Fabric. With solutions from other companies, you may need to buy 4 to 7 separate products, but with InterSystems you can create your Data Fabric with a single solution composed of a multimodel database, ESB/APIM, analytics and AI. It is cheaper and simpler.
Learn more at: https://pt.slideshare.net/alanmcsweeney/designing-an-enterprise-data-fabric