After upgrading to TDM Portal 4.10.218.0, we are able to connect to the Snowflake warehouse. Data generation works well for fewer than 500k records, but we need to publish over 2M records. At that volume, it appears that Portal does not commit each 10,000-record iteration to the target, but instead waits to commit all of the rows at the end of the entire run. Rather than finishing the execution, the job simply fails.
Release: 4.10
TDM Portal is working as designed. When publishing data from the TDM Portal Data Generator, batch commits are driven by the publish iterations (the repeat count, or global publish count), not by the table iterations (the table count).
In this particular case, the repeat count was set to 1 and the table repeat count was 19,850,226. With a repeat count of 1, the publish loop runs only once, so the 10,000-record commit threshold can never be reached; Portal was therefore trying to cache almost 20 million rows before sending them to the database and committing them in a single operation.
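The effect can be seen in a minimal Python sketch. This is illustrative only, not TDM Portal code; publish, buffer, and BATCH_SIZE are hypothetical names, with BATCH_SIZE standing in for Portal's 10,000-record commit threshold. The key point is that the commit check sits on the publish loop, while the table loop only accumulates rows:

    BATCH_SIZE = 10_000  # stands in for Portal's 10,000-record commit threshold

    def publish(repeat_count, table_count):
        buffer = []
        commits = 0
        for _ in range(repeat_count):        # publish iterations: the commit check lives here
            for _ in range(table_count):     # table iterations: rows only accumulate here
                buffer.append("row")
            if len(buffer) >= BATCH_SIZE:    # flush a batch at the publish-iteration boundary
                commits += 1
                buffer.clear()
        if buffer:                           # anything left over is committed at the very end
            commits += 1
        return commits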
Using the global publish repeat count instead of the table repeat count gives much better results: the publish job will then commit records to the target table in batches of 10,000 as it runs. To set this up for this example, use repeat count = 19,850,226 and table repeat count = 1.
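Running the sketch above with both configurations shows the difference:

    publish(repeat_count=1, table_count=19_850_226)
    # -> 1 commit: all ~20 million rows are cached in memory until the end of the run

    publish(repeat_count=19_850_226, table_count=1)
    # -> 1,986 commits: a flush every 10,000 rows plus the final remainder, so memory stays bounded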
The downside of this solution is that a separate generator must be created for every table you are working with.