So what’s the big deal? The Snowflake ecosystem is pretty robust and there are many ETL/ELT and data engineering tools out there. Many of these have partnered with Snowflake for 4+ years and have pretty good adoption and success rates.
So why would a customer want to use Coalesce to build their data pipeline in Snowflake?
For starters, like Snowflake, Coalesce was built in the cloud, for the cloud, and delivered as a service. It was built exclusively to support Snowflake, and as such it takes advantage of many cool Snowflake-specific features.
No other data engineering tool can really say that. Like the legacy RDBMSs before them, most data movement and transformation tools are either still legacy and on-premises (i.e., they require a server in a data center or, at best, a VM in the cloud), or they are cloud-washed: only moderately refactored to run in the cloud. That means a lot of management, administration, skill, and knowledge is still required to make them useful.
And the few that are cloud-based and have wide adoption require you to be a coder or programmer to use the product. While this is great for some data engineers, not every organization has enough of these data engineering developers to go around. Coalesce is different: it has a great, easy-to-use graphical interface that allows a SQL-savvy analyst or architect to easily define transformations and automatically generate all the Snowflake code to move and transform the data in the platform.
Let me emphasize that again – Coalesce generates the Snowflake native code. This is a low-code or no-code platform.
This is huge. It helps with agility and quality. A few clicks and you can change or add new sources, targets, and transformations, all using column-aware metadata. Since the code is generated, there are never any syntax or coding errors. Faster, better code means shorter time to value.
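To make the idea concrete, here is a sketch of the kind of Snowflake-native SQL such a tool can generate from column-aware metadata. This is an illustrative example, not actual Coalesce output; the schema, table, and column names are hypothetical.

```sql
-- Hypothetical generated code: build a staging table from a raw source,
-- with column mappings and a filter defined in the tool's metadata.
CREATE OR REPLACE TABLE stg.orders AS
SELECT
    o.order_id,
    o.customer_id,
    o.order_date,
    o.amount        AS order_amount   -- column rename from metadata
FROM raw.orders o
WHERE o.order_date >= '2020-01-01';   -- filter defined in the UI
```

Because the tool emits plain Snowflake SQL like this, the output runs natively in the platform with nothing extra to deploy.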
Add to this that Coalesce can automatically generate Streams and Tasks to do advanced CDC, and can automatically extract JSON from a VARIANT column and flatten it into columns in a generated table, and data engineers can be far more productive. All with zero coding!
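For readers less familiar with these Snowflake features, here is a hand-written sketch of the Streams-and-Tasks pattern combined with VARIANT flattening — the kind of plumbing described above. The table, warehouse, and column names are hypothetical, and a real generated pipeline would be more elaborate.

```sql
-- A Stream captures changed rows on the source table (CDC).
CREATE OR REPLACE STREAM raw_events_stream ON TABLE raw.events;

-- A Task runs on a schedule, but only when the Stream has new data,
-- flattening the JSON payload (a VARIANT column) into typed columns.
CREATE OR REPLACE TASK load_events
  WAREHOUSE = transform_wh
  SCHEDULE  = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('raw_events_stream')
AS
INSERT INTO analytics.events (event_id, event_type, user_id)
SELECT
    v.value:id::NUMBER,
    v.value:type::STRING,
    v.value:user_id::NUMBER
FROM raw_events_stream s,
     LATERAL FLATTEN(input => s.payload) v;  -- payload is VARIANT

-- Tasks are created suspended; resume to start the schedule.
ALTER TASK load_events RESUME;
```

Writing and maintaining this by hand for every source is exactly the repetitive work that code generation takes off an engineer's plate.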
Then there are all the built-in templates for building Data Vault objects like Hubs, Links, and Satellites, plus standard dimensional objects like Facts and Dimensions (which are SCD2-compliant by default). Even better, you can define your own templates, called User Defined Nodes, for any special object types or standards an organization may want to enforce.
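As a reference point for what an SCD2-compliant Dimension template encapsulates, here is a minimal sketch of a Type 2 slowly changing dimension load. This is an assumed illustration (the dimension design, tracked column, and names are hypothetical), not the template's actual output.

```sql
-- Step 1: close out current rows whose tracked attributes changed.
UPDATE dim.customer d
SET effective_to = CURRENT_TIMESTAMP(),
    is_current   = FALSE
WHERE d.is_current
  AND EXISTS (
      SELECT 1
      FROM stg.customer s
      WHERE s.customer_id   = d.customer_id
        AND s.customer_name <> d.customer_name
  );

-- Step 2: insert new versions for changed customers and rows for
-- brand-new customers (neither has a current row after Step 1).
INSERT INTO dim.customer (customer_id, customer_name,
                          effective_from, effective_to, is_current)
SELECT s.customer_id, s.customer_name,
       CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg.customer s
LEFT JOIN dim.customer d
  ON d.customer_id = s.customer_id
 AND d.is_current
WHERE d.customer_id IS NULL;
```

A node template generates and maintains this pattern consistently across every dimension, which is where the standardization payoff comes from.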
All of this means a data engineer, architect, or analyst can be productive immediately and start delivering value on day one. And the icing on the cake is that you get all the documentation, lineage, impact analysis, and other metadata at the push of a button.
This is the future of data engineering.