Securely Transferring Sensitive Data Between Clouds
Outcome:
Devis can secure and monitor the pipeline within their Government cloud at any time.
Additional layers of encryption automatically protect their sensitive PII data.
Connects to 200+ data sources and data targets without requiring any coding or complex configuration.
Data synchronization between sources and targets.
Replicates and transforms data on the fly, as needed.
Situation
Devis was asked to assist in the design and engineering of an automated data pipeline system that could move hundreds of production tables and their PII-sensitive data from one cloud platform to another as part of an integrated data strategy for one of their Federal customers.
The ultimate objective was to create a code-free, near-real-time data pipeline system that could move sensitive data from a proprietary SaaS cloud platform to a data lake/warehouse architecture inside a government cloud platform in an automated, dependable, and economical manner.
Challenges
Transferring sensitive data across clouds came with a number of noteworthy problems. The team quickly concluded that writing hundreds of API scripts to query the proprietary SaaS data source and move the data into their Government cloud-based data lake was not practical, economical, or sustainable. As part of their operational production environment, the Devis team searched the commercial and government marketplaces for solutions to automate, transform, and sustain the transfer of this data.
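To put that in perspective, a hand-rolled approach would have meant writing and maintaining something like the hypothetical per-table extraction script sketched below, repeated for every one of the hundreds of tables. The endpoint, credentials, and bucket name are invented for illustration; this is not the customer's actual API or code.

```python
# Hypothetical example of the per-table extraction script the team chose to avoid:
# each source table would need its own copy of something like this, with its own
# pagination rules, schema drift handling, retries, and credential rotation.
import json
import os

import boto3
import requests  # assumes the SaaS source exposes a paginated REST API

API_BASE = "https://saas.example.com/api/v1"  # illustrative endpoint
TOKEN = os.environ["SAAS_API_TOKEN"]          # illustrative credential handling
s3 = boto3.client("s3")

def extract_table(table_name: str, bucket: str = "example-landing-bucket") -> None:
    """Pull one table page by page and land the raw JSON in the data lake."""
    page, rows = 1, []
    while True:
        resp = requests.get(
            f"{API_BASE}/{table_name}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        rows.extend(batch)
        page += 1
    s3.put_object(
        Bucket=bucket,
        Key=f"raw/{table_name}.json",
        Body=json.dumps(rows).encode("utf-8"),
    )

# Multiply this by hundreds of tables (plus scheduling, monitoring, and PII
# handling for each one) and the maintenance burden becomes clear.
extract_table("employee_records")
```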
Required technology support and architecture
After analyzing both government and commercial solutions, they concluded that the most secure and efficient solution would need to satisfy at least three strict requirements:
The solution must either be FedRAMP-approved or be able to be self-hosted in their secure and managed VPC within a Government-approved cloud;
The tool must be able to connect to a wide variety of data sources and data targets without requiring any coding; and
The platform must enable them to perform near-real-time replication and transformation (ETL/ELT) of data on the fly to meet the Government’s unique end-point data requirements.
Many of the alternative solutions they evaluated were either SaaS-based or hosted their VPC in a cloud that did not comply with government regulations. These options failed the first hard requirement, because they would have violated the Government's obligation to adopt and enforce security measures that safeguard its sensitive personally identifiable information (PII). Under those architectures, Devis's data would have had to pass through unapproved security zones, unprotected cloud environments, and/or on-premise settings before returning to the secured data lake in their Government cloud.
Lyftrondata as the Best-fit solution
Meeting the Requirements Out of the Box
Based on their investigation, Lyftrondata was found to be the most suitable option for implementing a secure, automated data pipeline between these two very different cloud platforms. It fulfilled their three demanding requirements right out of the box, enabling them to overcome the obstacles they had encountered.
Although Lyftrondata is not a FedRAMP-approved product, it can be self-hosted on an EC2 instance in a VPC that Devis can secure and monitor within their Government cloud. Lyftrondata also made it possible for them to add additional layers of encryption and automatically protect their sensitive PII data.
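As a rough illustration of what such an additional encryption layer can look like (a generic sketch, not Lyftrondata's actual mechanism), the snippet below envelope-encrypts a single PII field with an AWS KMS data key before the record lands in the data lake. The key alias, region, and field names are assumptions.

```python
# Hypothetical sketch: field-level envelope encryption for a PII column before
# it is written to the data lake. The KMS key alias, region, and field names
# are illustrative, not taken from the Devis/Lyftrondata setup.
import base64

import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms", region_name="us-gov-west-1")  # assumed GovCloud region

def encrypt_pii_field(value: str, key_alias: str = "alias/pii-data-key") -> dict:
    """Encrypt one PII value with a fresh KMS data key (envelope encryption)."""
    data_key = kms.generate_data_key(KeyId=key_alias, KeySpec="AES_256")
    fernet = Fernet(base64.urlsafe_b64encode(data_key["Plaintext"]))
    return {
        # Ciphertext of the PII value itself.
        "ciphertext": fernet.encrypt(value.encode("utf-8")).decode("ascii"),
        # Encrypted data key, stored alongside the value so KMS can unwrap it later.
        "encrypted_key": base64.b64encode(data_key["CiphertextBlob"]).decode("ascii"),
    }

# Example: protect the SSN column of one record before loading it.
record = {"employee_id": 1001, "ssn": "123-45-6789"}
record["ssn"] = encrypt_pii_field(record["ssn"])
```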
Additionally, it offers an impressive collection of more than 150 on-premise and cloud data source API connectors, allowing users to connect to a wide range of data sources and destinations without complicated configuration or scripting. Lyftrondata's architecture let them set up continuous data synchronization between their data sources and targets.
Finally, Lyftrondata enables users to perform ETL/ELT operations on data before it reaches its target environment, which can include cloud-native services such as Amazon Redshift, AWS S3, and SQL Server/RDS, as well as COTS products like Tableau, Alteryx, or Databricks. This gives users the flexibility to replicate and transform their data on the fly as needed, and many of these services and tools are natively integrated with Lyftrondata.
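To make the replicate-and-transform step concrete, here is a minimal, hypothetical sketch of one such ELT hop written in plain Python rather than Lyftrondata: records are transformed in flight, staged to S3 as CSV, and loaded into Redshift via a COPY issued through the Redshift Data API. The bucket, cluster, table, and IAM role names are all assumed.

```python
# Hypothetical sketch of an ELT hop: transform records in flight, stage them
# to S3 as CSV, then COPY them into Redshift. All resource names are made up.
import csv
import io

import boto3

s3 = boto3.client("s3")
redshift = boto3.client("redshift-data")

def transform(row: dict) -> dict:
    """Example in-flight transformation: normalize a field and drop an unused one."""
    row["state"] = row["state"].strip().upper()
    row.pop("legacy_notes", None)
    return row

def stage_and_load(rows, bucket="example-staging-bucket", key="batch/orders.csv"):
    transformed = [transform(r) for r in rows]
    # Serialize the batch to CSV in memory and stage it in S3.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=transformed[0].keys())
    writer.writeheader()
    writer.writerows(transformed)
    s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue().encode("utf-8"))
    # Load the staged file into Redshift with a COPY statement.
    redshift.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="analytics",
        DbUser="etl_user",
        Sql=(
            f"COPY orders FROM 's3://{bucket}/{key}' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy' "
            "CSV IGNOREHEADER 1;"
        ),
    )
```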
The grand result
Since integrating Lyftrondata into their around-the-clock global cloud production environment in September 2020, they have been able to move data securely and quickly from a single cloud-based data source to several cloud-based data targets without having to write or maintain any code. They are pleased with Lyftrondata's performance, innovation, and flexibility as a vital component of their current production toolkit and pipeline. To sustain this success, they plan to keep using Lyftrondata to adapt quickly and easily to their customers' changing data strategies and production operations.

Are you unsure about the best option for setting up your data infrastructure?
