Read the first 100 lines of a huge CSV file on S3

There are multiple huge CSV files in an S3 storage bucket, and the files have different column names. The task is to use Java to return the first 100 lines of each CSV file, taking the file name as a parameter.
Here is the SPL script:
| | A |
|---|---|
| 1 | =s3_open("ASIAVSPDUYZ7XXXXXXX":"7/5xYPO7a+9Po+IE1ySbmu9UB2hWIkWek1Sqn6E4":"us-east-2":"https://s3.us-east-2.amazonaws.com") |
| 2 | =s3_file(A1, "bucket1", arg_fileName) |
| 3 | =A2.cursor@t().fetch(100) |
| 4 | =s3_close(A1) |
| 5 | return A3 |
A1: Connect to the S3 service.
A2: Load the file specified by the arg_fileName parameter from the storage bucket.
A3: Create a cursor on the file (@t treats the first line as column names) and fetch the first 100 lines.
A4: Close the S3 connection.
A5: Return the result of A3.
Read How to Call an SPL Script in Java to learn how to integrate SPL into a Java application.
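For reference, here is a minimal sketch of such an integration through the esProc JDBC driver. It assumes the script above is saved as getCsvHead.splx with arg_fileName defined as its script parameter; the script name, the CSV file name, and the way the result is printed are illustrative, not prescribed by the article.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;

public class ReadCsvHead {
    public static void main(String[] args) throws Exception {
        // Load the esProc JDBC driver and open a local connection
        Class.forName("com.esproc.jdbc.InternalDriver");
        try (Connection conn = DriverManager.getConnection("jdbc:esproc:local://")) {
            // Call the SPL script (assumed saved as getCsvHead.splx),
            // passing the CSV file name as the script parameter
            try (PreparedStatement st = conn.prepareCall("call getCsvHead(?)")) {
                st.setObject(1, "orders.csv"); // hypothetical file name
                try (ResultSet rs = st.executeQuery()) {
                    ResultSetMetaData md = rs.getMetaData();
                    int cols = md.getColumnCount();
                    // Print the first 100 lines returned by the script
                    while (rs.next()) {
                        StringBuilder row = new StringBuilder();
                        for (int i = 1; i <= cols; i++) {
                            if (i > 1) row.append(',');
                            row.append(rs.getObject(i));
                        }
                        System.out.println(row);
                    }
                }
            }
        }
    }
}
```

Calling the script for each file name in turn returns the first 100 lines of every CSV file in the bucket.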
To deploy the S3 external library in SPL, see https://doc.scudata.com/esproc/ext/glwbkbs.html#Deployment.
SPL Official Website 👉 https://www.scudata.com
SPL Feedback and Help 👉 https://www.reddit.com/r/esProcSPL
SPL Learning Material 👉 https://c.scudata.com
SPL Source Code and Package 👉 https://github.com/SPLWare/esProc
Discord 👉 https://discord.gg/cFTcUNs7
Youtube 👉 https://www.youtube.com/@esProc_SPL