

I have a huge table of roughly 20 billion records, and the source input for my job is the target Parquet file itself.

Every day I receive an incoming delta file that should update existing records in the target folder and append new ones.

Using Spark SQL DataFrames, is there a way to read and update only particular partitions of the Parquet file?
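
For context, here is a minimal sketch of the kind of partition-level upsert I'm after, in PySpark. The paths, the `event_date` partition column, and the `id` join key are illustrative assumptions (none of them come from my actual schema), and the delta is assumed to share the target's column layout:

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily-delta-upsert-sketch")
    # Only replace partitions that actually receive output rows,
    # instead of truncating the entire target directory on overwrite.
    .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
    .getOrCreate()
)

target_path = "/data/target"    # hypothetical target Parquet location
delta_path = "/data/delta"      # hypothetical daily delta file
staging_path = "/data/staging"  # hypothetical scratch area

delta = spark.read.parquet(delta_path)

# Read only the target partitions the delta touches, so the untouched
# billions of rows are never scanned.
touched = [r.event_date for r in delta.select("event_date").distinct().collect()]
target = spark.read.parquet(target_path).where(F.col("event_date").isin(touched))

# Upsert: drop target rows whose key appears in the delta, then add
# every delta row (both updated and brand-new records).
merged = target.join(delta.select("id"), on="id", how="left_anti").unionByName(delta)

# Spark refuses to overwrite a path it is also reading from in the same
# plan, so materialize the result to a staging area first...
merged.write.mode("overwrite").partitionBy("event_date").parquet(staging_path)

# ...then overwrite the target; with dynamic partitionOverwriteMode,
# only the touched event_date partitions are rewritten.
(
    spark.read.parquet(staging_path)
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet(target_path)
)
```

As far as I know, `partitionOverwriteMode=dynamic` needs Spark 2.3 or later, and table formats like Delta Lake expose a `MERGE` operation that handles this kind of upsert natively, so pointers toward either approach would help.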


