Writes query results from a `SELECT` statement to the specified data format. Supported formats for `UNLOAD` include Apache Parquet, ORC, Apache Avro, and JSON. CSV is the only output format used by the Athena `SELECT` query, but you can use `UNLOAD` to write the output of a `SELECT` query to the formats that `UNLOAD` supports. Although you can use the CTAS statement to output data in formats other than CSV, those statements also require the creation of a table in Athena. The `UNLOAD` statement is useful when you want to output the results of a `SELECT` query in a non-CSV format but do not require the associated table. For example, a downstream application might require the results of a `SELECT` query to be in JSON format, and Parquet or ORC might provide a performance advantage over CSV if you intend to use the results of the `SELECT` query for additional analysis.
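As an illustration, a query wrapped with `UNLOAD` might look like the following sketch (the S3 output location is a placeholder, not a real bucket):

```sql
-- Write the SELECT result to S3 as Parquet instead of CSV
UNLOAD (SELECT * FROM awswrangler_test.noaa)
TO 's3://your-bucket/unload-output/'
WITH (format = 'PARQUET')
```

noctua generates and manages this wrapping for you, so you never need to write the `UNLOAD` statement by hand.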
`noctua` v2.2.0.9000+ can now leverage this functionality with the `unload` parameter within `dbGetQuery`, `dbSendQuery` and `dbExecute`. This functionality offers faster performance for mid to large result sizes.
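For example, a minimal sketch of the per-query switch (assumes working AWS credentials and a hypothetical table `my_db.my_table`):

```r
library(DBI)

# Connect to AWS Athena via noctua
con <- dbConnect(noctua::athena())

# Standard query: results are read back as CSV
df_csv <- dbGetQuery(con, "SELECT * FROM my_db.my_table")

# Same query, wrapped with UNLOAD and read back as Parquet
df_parquet <- dbGetQuery(con, "SELECT * FROM my_db.my_table", unload = TRUE)
```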
`unload=FALSE` (Default)

Runs a regular query on AWS Athena and then reads the table data as CSV directly from AWS S3.

PROS:

CONS:
`unload=TRUE`

Wraps the query with `UNLOAD` and then reads the table data as Parquet directly from AWS S3.

PROS:

CONS:

- Unable to retain the result order from an `order by` clause, due to multiple parquet files being produced by AWS Athena.

Set up the AWS Athena table (example taken from the AWS Data Wrangler: Amazon Athena Tutorial):
```python
# Python
import awswrangler as wr
import getpass

bucket = getpass.getpass()
path = f"s3://{bucket}/data/"

if "awswrangler_test" not in wr.catalog.databases().values:
    wr.catalog.create_database("awswrangler_test")

cols = ["id", "dt", "element", "value", "m_flag", "q_flag", "s_flag", "obs_time"]

# Read 10 files from the 1890 decade (~1GB)
df = wr.s3.read_csv(
    path="s3://noaa-ghcn-pds/csv/189",
    names=cols,
    parse_dates=["dt", "obs_time"])

wr.s3.to_parquet(
    df=df,
    path=path,
    dataset=True,
    mode="overwrite",
    database="awswrangler_test",
    table="noaa"
)

wr.catalog.table(database="awswrangler_test", table="noaa")
```
Benchmark the `unload` method using `noctua`.
```r
# R
library(DBI)

con <- dbConnect(noctua::athena())

dbGetQuery(con, "select count(*) as n from awswrangler_test.noaa")
# Info: (Data scanned: 0 Bytes)
#           n
# 1: 29554197

# Query ran using CSV output
system.time({
  df = dbGetQuery(con, "SELECT * FROM awswrangler_test.noaa")
})
# Info: (Data scanned: 80.88 MB)
#    user  system elapsed
#  57.004   8.430 160.567

dim(df)
# [1] 29554197        8

noctua::noctua_options(cache_size = 1)

# Query ran using UNLOAD Parquet output
system.time({
  df = dbGetQuery(con, "SELECT * FROM awswrangler_test.noaa", unload = TRUE)
})
# Info: (Data scanned: 80.88 MB)
#    user  system elapsed
#  21.622   2.350  39.232

dim(df)
# [1] 29554197        8

# Query ran using cached UNLOAD Parquet output
system.time({
  df = dbGetQuery(con, "SELECT * FROM awswrangler_test.noaa", unload = TRUE)
})
# Info: (Data scanned: 80.88 MB)
#    user  system elapsed
#  13.738   1.886  11.029

dim(df)
# [1] 29554197        8
```
| Method | Time (seconds) |
|---|---|
| `unload=FALSE` | 160.567 |
| `unload=TRUE` | 39.232 |
| Cached `unload=TRUE` | 11.029 |
From this simple benchmark there is a significant performance improvement when querying AWS Athena with `unload=TRUE`.
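If you want `unload=TRUE` for every query on a connection rather than per call, newer releases of noctua also expose an `unload` switch through `noctua_options()`. This is an assumption about versions after the one benchmarked here, so check `?noctua_options` in your installed version:

```r
# Assumed global switch (verify availability in your noctua version):
# all subsequent queries on the connection use the UNLOAD Parquet path
noctua::noctua_options(unload = TRUE)
```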
Note: the benchmark was run on an AWS SageMaker `ml.t3.xlarge` instance.