Type: | Package |
Title: | Connect to 'AWS Athena' using R 'AWS SDK' 'paws' ('DBI' Interface) |
Version: | 2.6.3 |
Description: | Designed to be compatible with the 'R' package 'DBI' (Database Interface) when connecting to Amazon Web Service ('AWS') Athena https://aws.amazon.com/athena/. To do this the 'R' 'AWS' Software Development Kit ('SDK') 'paws' https://github.com/paws-r/paws is used as a driver. |
Imports: | data.table (≥ 1.12.4), DBI (≥ 0.7), methods, paws (≥ 0.2.0), stats, utils, uuid (≥ 0.1-4) |
Suggests: | arrow, bit64, dplyr (≥ 1.0.0), dbplyr (≥ 2.3.3), testthat, tibble, vroom (≥ 1.2.0), covr, knitr, rmarkdown, jsonify, jsonlite |
VignetteBuilder: | knitr |
Depends: | R (≥ 3.2.0) |
License: | MIT + file LICENSE |
Encoding: | UTF-8 |
RoxygenNote: | 7.3.3 |
URL: | https://dyfanjones.github.io/noctua/, https://github.com/DyfanJones/noctua |
BugReports: | https://github.com/DyfanJones/noctua/issues |
Collate: | 'utils.R' 'dplyr_integration.R' 'noctua.R' 'Driver.R' 'Connection.R' 'DataTypes.R' 'File_Parser.R' 'Options.R' 'fetch_utils.R' 'Result.R' 'Table.R' 'View.R' 'athena_low_api.R' 'column_parser.R' 'sql_translate_utils.R' 'sql_translate_env.R' 'zzz.R' |
NeedsCompilation: | no |
Packaged: | 2025-09-15 19:03:56 UTC; dyfanjones |
Author: | Dyfan Jones [aut, cre] |
Maintainer: | Dyfan Jones <dyfan.r.jones@gmail.com> |
Repository: | CRAN |
Date/Publication: | 2025-09-15 19:40:02 UTC |
noctua: a DBI interface into Athena using paws SDK
Description
noctua provides a seamless DBI interface into Athena using the R package paws.
Goal of Package
The goal of the noctua
package is to provide a DBI-compliant interface to Amazon's Athena
using the paws
software development kit (SDK). This allows for an efficient, easy-to-set-up connection to Athena, using the paws
SDK as a driver.
AWS Command Line Interface
As noctua uses paws
as its backend, the AWS Command Line Interface (AWS CLI) can be used
to avoid exposing user credentials when interacting with Athena.
This allows AWS profile names to be set up so that noctua can connect to different accounts from the same machine, without needing to hard-code any credentials.
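For example, a connection using a named AWS CLI profile might look like the sketch below (the profile name and staging directory are illustrative placeholders, not real values):

```r
## Not run:
library(DBI)

# Connect to Athena using a hypothetical AWS CLI profile called "analytics",
# configured beforehand with `aws configure --profile analytics`.
con <- dbConnect(noctua::athena(),
  profile_name = "analytics",
  s3_staging_dir = "s3://mybucket/query-results/"
)

# No credentials appear anywhere in the R code
dbListTables(con)
dbDisconnect(con)
## End(Not run)
```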
Author(s)
Maintainer: Dyfan Jones dyfan.r.jones@gmail.com
See Also
Useful links:
Report bugs at https://github.com/DyfanJones/noctua/issues
Athena Connection Methods
Description
Implementations of pure virtual functions defined in the DBI
package
for AthenaConnection objects.
Method to get Athena schemas, tables and table types, returned as a data.frame.
This method returns all partitions from an Athena table.
Executes a statement to return the data definition language (DDL) of the Athena table.
Usage
## S4 method for signature 'AthenaConnection'
show(object)
## S4 method for signature 'AthenaConnection'
dbDisconnect(conn, ...)
## S4 method for signature 'AthenaConnection'
dbIsValid(dbObj, ...)
## S4 method for signature 'AthenaConnection,character'
dbSendQuery(conn, statement, unload = athena_unload(), ...)
## S4 method for signature 'AthenaConnection,character'
dbSendStatement(conn, statement, unload = athena_unload(), ...)
## S4 method for signature 'AthenaConnection,character'
dbExecute(conn, statement, unload = athena_unload(), ...)
## S4 method for signature 'AthenaConnection,ANY'
dbDataType(dbObj, obj, ...)
## S4 method for signature 'AthenaConnection,data.frame'
dbDataType(dbObj, obj, ...)
## S4 method for signature 'AthenaConnection,character'
dbQuoteString(conn, x, ...)
## S4 method for signature 'AthenaConnection,POSIXct'
dbQuoteString(conn, x, ...)
## S4 method for signature 'AthenaConnection,Date'
dbQuoteString(conn, x, ...)
## S4 method for signature 'AthenaConnection,SQL'
dbQuoteIdentifier(conn, x, ...)
dbGetTables(conn, ...)
## S4 method for signature 'AthenaConnection'
dbGetTables(conn, catalog = NULL, schema = NULL, ...)
## S4 method for signature 'AthenaConnection,character'
dbListFields(conn, name, ...)
## S4 method for signature 'AthenaConnection,character'
dbExistsTable(conn, name, ...)
## S4 method for signature 'AthenaConnection,Id'
dbExistsTable(conn, name, ...)
## S4 method for signature 'AthenaConnection,character'
dbRemoveTable(conn, name, delete_data = TRUE, confirm = FALSE, ...)
## S4 method for signature 'AthenaConnection,Id'
dbRemoveTable(conn, name, delete_data = TRUE, confirm = FALSE, ...)
## S4 method for signature 'AthenaConnection,character'
dbGetQuery(conn, statement, statistics = FALSE, unload = athena_unload(), ...)
## S4 method for signature 'AthenaConnection'
dbGetInfo(dbObj, ...)
dbGetPartition(conn, name, ..., .format = FALSE)
## S4 method for signature 'AthenaConnection'
dbGetPartition(conn, name, ..., .format = FALSE)
dbShow(conn, name, ...)
## S4 method for signature 'AthenaConnection'
dbShow(conn, name, ...)
## S4 method for signature 'AthenaConnection'
dbBegin(conn, ...)
## S4 method for signature 'AthenaConnection'
dbCommit(conn, ...)
## S4 method for signature 'AthenaConnection'
dbRollback(conn, ...)
Arguments
object |
Any R object |
conn |
A DBI::DBIConnection object, as returned by dbConnect(). |
... |
Other parameters passed on to methods. |
dbObj |
An object inheriting from |
statement |
a character string containing SQL. |
unload |
boolean input to modify |
obj |
An R object whose SQL type we want to determine. |
x |
A character vector to quote as string. |
catalog |
Athena catalog, default set to NULL to return all tables from all Athena catalogs |
schema |
Athena schema, default set to NULL to return all tables from all Athena schemas. Note: The use of DATABASE and SCHEMA is interchangeable within Athena. |
name |
The table name, passed on to
|
delete_data |
Deletes the S3 files linked to the AWS Athena table |
confirm |
Allows for S3 files to be deleted without the prompt check. It is recommended to leave this set to |
statistics |
If set to |
.format |
Re-formats the AWS Athena partition output so that each column represents a partition
from the AWS Athena table. Default set to |
Value
dbGetTables()
returns a data.frame.
dbGetPartition()
returns a data.frame of all partitions in the table; if the Athena table has no partitions, the function will return an error from Athena.
dbShow()
returns the SQL
characters of the Athena table DDL.
Slots
ptr
a list of connection objects from the paws SDK package.
info
a list of metadata objects.
quote
syntax used to quote SQL queries when creating Athena DDL.
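A minimal sketch of the Athena-specific methods described above (requires an AWS account; the table name "mtcars" is illustrative and assumes the table already exists in Athena):

```r
## Not run:
library(DBI)

# Demo connection to Athena using a profile name
con <- dbConnect(noctua::athena())

# Return all partitions of an existing Athena table as a data.frame
dbGetPartition(con, "mtcars")

# Return the DDL of the Athena table
dbShow(con, "mtcars")

dbDisconnect(con)
## End(Not run)
```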
Athena Driver Methods
Description
Implementations of pure virtual functions defined in the DBI
package
for AthenaDriver objects.
Usage
## S4 method for signature 'AthenaDriver'
show(object)
## S4 method for signature 'AthenaDriver,ANY'
dbDataType(dbObj, obj, ...)
## S4 method for signature 'AthenaDriver,list'
dbDataType(dbObj, obj, ...)
Arguments
object |
Any R object |
dbObj |
An object inheriting from DBI::DBIDriver or DBI::DBIConnection. |
obj |
An R object whose SQL type we want to determine. |
... |
Other arguments passed on to methods. |
List objects in a connection.
Description
Lists all of the objects in the connection, or all the objects which have specific attributes.
Usage
AthenaListObjects(connection, ...)
Arguments
connection |
A connection object, as returned by |
... |
Attributes to filter by. |
Details
When used without parameters, this function returns all of the objects known
by the connection. Any parameters passed will filter the list to only objects
which have the given attributes; for instance, passing schema = "foo"
will return only objects matching the schema foo
.
Value
A data frame with name
and type
columns, listing the
objects.
Preview the data in an object.
Description
Return the data inside an object as a data frame.
Usage
AthenaPreviewObject(connection, rowLimit, ...)
Arguments
connection |
A connection object, as returned by |
rowLimit |
The maximum number of rows to display. |
... |
Parameters specifying the object. |
Details
The object to be previewed must be specified as one of the arguments
(e.g. table = "employees"
); depending on the driver and underlying
data store, additional specification arguments may be required.
Value
A data frame containing the data in the object.
Athena Result Methods
Description
Implementations of pure virtual functions defined in the DBI
package
for AthenaResult objects.
Returns AWS Athena statistics from queries executed via DBI::dbSendQuery.
Usage
## S4 method for signature 'AthenaResult'
dbClearResult(res, ...)
## S4 method for signature 'AthenaResult'
dbFetch(res, n = -1, ...)
## S4 method for signature 'AthenaResult'
dbHasCompleted(res, ...)
## S4 method for signature 'AthenaResult'
dbIsValid(dbObj, ...)
## S4 method for signature 'AthenaResult'
dbGetInfo(dbObj, ...)
## S4 method for signature 'AthenaResult'
dbColumnInfo(res, ...)
dbStatistics(res, ...)
## S4 method for signature 'AthenaResult'
dbStatistics(res, ...)
## S4 method for signature 'AthenaResult'
dbGetStatement(res, ...)
Arguments
res |
An object inheriting from DBI::DBIResult. |
... |
Other arguments passed on to methods. |
n |
maximum number of records to retrieve per fetch. Use |
dbObj |
An object inheriting from DBI::DBIResult, DBI::DBIConnection, or DBI::DBIDriver. |
Value
dbStatistics()
returns a list containing Athena statistics returned from paws.
Note
If a user does not have permission to remove AWS S3 resources from the AWS Athena output location, then an AWS warning will be returned.
For example: AccessDenied (HTTP 403). Access Denied.
In this case it is better to use query caching, or optionally to prevent clearing of AWS S3 resources, using noctua_options()
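For instance, clearing of S3 result objects can be switched off globally via the clear_s3_resource option (shown here as a sketch; this assumes a session where noctua is installed):

```r
library(noctua)

# Prevent noctua from deleting AWS S3 result objects when clearing results.
# Useful when the IAM role lacks s3:DeleteObject permission on the
# Athena output location, avoiding AccessDenied (HTTP 403) warnings.
noctua_options(clear_s3_resource = FALSE)

# Alternatively, cache query metadata locally so repeated queries
# re-use previous results instead of re-executing.
noctua_options(cache_size = 10)
```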
Convenience functions for reading/writing DBMS tables
Description
Convenience functions for reading/writing DBMS tables
Usage
## S4 method for signature 'AthenaConnection,character,data.frame'
dbWriteTable(
conn,
name,
value,
overwrite = FALSE,
append = FALSE,
row.names = NA,
field.types = NULL,
partition = NULL,
s3.location = NULL,
file.type = c("tsv", "csv", "parquet", "json"),
compress = FALSE,
max.batch = Inf,
...
)
## S4 method for signature 'AthenaConnection,Id,data.frame'
dbWriteTable(
conn,
name,
value,
overwrite = FALSE,
append = FALSE,
row.names = NA,
field.types = NULL,
partition = NULL,
s3.location = NULL,
file.type = c("tsv", "csv", "parquet", "json"),
compress = FALSE,
max.batch = Inf,
...
)
## S4 method for signature 'AthenaConnection,SQL,data.frame'
dbWriteTable(
conn,
name,
value,
overwrite = FALSE,
append = FALSE,
row.names = NA,
field.types = NULL,
partition = NULL,
s3.location = NULL,
file.type = c("tsv", "csv", "parquet", "json"),
compress = FALSE,
max.batch = Inf,
...
)
Arguments
conn |
An |
name |
A character string specifying a table name. Names will be automatically quoted so you can use any sequence of characters, not just any valid bare table name. |
value |
A data.frame to write to the database. |
overwrite |
Allows overwriting the destination table. Cannot be |
append |
Allow appending to the destination table. Cannot be
|
row.names |
Either TRUE, FALSE, NA or a string. If TRUE, always translate row names to a column called "row_names". If FALSE, never translate row names. If NA, translate row names only if they're a character vector. A string is equivalent to TRUE, but allows you to override the default name. For backward compatibility, NULL is equivalent to FALSE. |
field.types |
Additional field types used to override derived types. |
partition |
Partition Athena table (needs to be a named list or vector) for example: |
s3.location |
S3 bucket to store the Athena table, must be set as an S3 URI, for example ("s3://mybucket/data/").
By default, the s3.location is set to the S3 staging directory from |
file.type |
What file type to store data.frame on s3, noctua currently supports |
compress |
|
max.batch |
Split the data frame by a maximum number of rows (e.g. 100,000) so that multiple files can be uploaded into AWS S3. By default, when compression
is set to |
... |
Other arguments used by individual methods. |
Value
dbWriteTable()
returns TRUE
, invisibly. If the table exists, and both append and overwrite
arguments are unset, or append = TRUE and the data frame with the new data has different column names,
an error is raised; the remote table remains unchanged.
See Also
Examples
## Not run:
# Note:
# - Require AWS Account to run below example.
# - Different connection methods can be used please see `noctua::dbConnect` documentation
library(DBI)
# Demo connection to Athena using profile name
con <- dbConnect(noctua::athena())
# List existing tables in Athena
dbListTables(con)
# Write data.frame to Athena table
dbWriteTable(con, "mtcars", mtcars,
partition = c("TIMESTAMP" = format(Sys.Date(), "%Y%m%d")),
s3.location = "s3://mybucket/data/"
)
# Read entire table from Athena
dbReadTable(con, "mtcars")
# List all tables in Athena after uploading new table to Athena
dbListTables(con)
# Checking if uploaded table exists in Athena
dbExistsTable(con, "mtcars")
# using default s3.location
dbWriteTable(con, "iris", iris)
# Read entire table from Athena
dbReadTable(con, "iris")
# List all tables in Athena after uploading new table to Athena
dbListTables(con)
# Checking if uploaded table exists in Athena
dbExistsTable(con, "iris")
# Disconnect from Athena
dbDisconnect(con)
## End(Not run)
Assume AWS ARN Role
Description
Returns a set of temporary security credentials that you can use to access AWS resources that you might not normally have access to (link). These temporary credentials consist of an access key ID, a secret access key, and a security token. Typically, you use AssumeRole within your account or for cross-account access.
Usage
assume_role(
profile_name = NULL,
region_name = NULL,
role_arn = NULL,
role_session_name = sprintf("noctua-session-%s", as.integer(Sys.time())),
duration_seconds = 3600L,
set_env = FALSE
)
Arguments
profile_name |
The name of a profile to use. If not given, then the default profile is used. To set profile name, the AWS Command Line Interface (AWS CLI) will need to be configured. To configure AWS CLI please refer to: Configuring the AWS CLI. |
region_name |
Default region when creating new connections. Please refer to link for
AWS region codes (region code example: Region = EU (Ireland) |
role_arn |
The Amazon Resource Name (ARN) of the role to assume (such as |
role_session_name |
An identifier for the assumed role session. By default |
duration_seconds |
The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) up to the maximum session duration setting for the role. This setting can have a value from 1 hour to 12 hours. By default duration is set to 3600 seconds (1 hour). |
set_env |
If set to |
Value
assume_role()
returns a list containing: "AccessKeyId"
, "SecretAccessKey"
, "SessionToken"
and "Expiration"
See Also
Examples
## Not run:
# Note:
# - Require AWS Account to run below example.
library(noctua)
library(DBI)
# Assuming demo ARN role
assume_role(
profile_name = "YOUR_PROFILE_NAME",
role_arn = "arn:aws:sts::123456789012:assumed-role/role_name/role_session_name",
set_env = TRUE
)
# Connect to Athena using ARN Role
con <- dbConnect(noctua::athena())
## End(Not run)
Athena Driver
Description
Driver for an Athena paws connection.
Usage
athena()
Value
athena()
returns an S4 class. This class is used to activate the Athena method for DBI::dbConnect.
See Also
Athena S3 implementation of dbplyr backend functions
Description
These functions are used to build the different types of SQL queries. The AWS Athena implementation gives extra parameters to allow access to the standard DBI Athena methods. They also utilise AWS Glue to speed up SQL query execution.
Usage
sql_query_explain.AthenaConnection(con, sql, format = "text", type = NULL, ...)
sql_query_fields.AthenaConnection(con, sql, ...)
sql_escape_date.AthenaConnection(con, x)
sql_escape_datetime.AthenaConnection(con, x)
Arguments
con |
A dbConnect object, as returned by |
sql |
SQL code to be sent to AWS Athena |
format |
returning format for explain queries, default set to |
type |
return plan for explain queries, default set to |
... |
other parameters, currently not implemented |
x |
R object to be transformed into its Athena equivalent |
Value
- sql_query_explain
Returns SQL query for the AWS Athena explain statement
- sql_query_fields
Returns SQL query column names
- sql_escape_date
Returns SQL escaping for dates
- sql_escape_datetime
Returns SQL escaping for date-times
Athena S3 implementation of dbplyr backend functions (api version 1).
Description
These functions are used to build the different types of SQL queries. The AWS Athena implementation gives extra parameters to allow access to the standard DBI Athena methods. They also utilise AWS Glue to speed up SQL query execution.
Usage
db_explain.AthenaConnection(con, sql, ...)
db_query_fields.AthenaConnection(con, sql, ...)
Arguments
con |
A dbConnect object, as returned by |
sql |
SQL code to be sent to AWS Athena |
... |
other parameters, currently not implemented |
Value
- db_explain
Returns the AWS Athena explain statement
- db_query_fields
Returns SQL query column names
Connect to Athena using the R SDK paws
Description
It is never advised to hard-code credentials when making a connection to Athena (even though the option is there). Instead, it is advised to use
profile_name
(set up by the AWS Command Line Interface),
Amazon Resource Name (ARN) roles, or environment variables. Here is a list
of supported environment variables:
AWS_ACCESS_KEY_ID: is equivalent to the
dbConnect
parameter -aws_access_key_id
AWS_SECRET_ACCESS_KEY: is equivalent to the
dbConnect
parameter -aws_secret_access_key
AWS_SESSION_TOKEN: is equivalent to the
dbConnect
parameter -aws_session_token
AWS_EXPIRATION: is equivalent to the
dbConnect
parameter -duration_seconds
AWS_ATHENA_S3_STAGING_DIR: is equivalent to the
dbConnect
parameter -s3_staging_dir
AWS_ATHENA_WORK_GROUP: is equivalent to
dbConnect
parameter -work_group
AWS_REGION: is equivalent to
dbConnect
parameter -region_name
NOTE: If you have set any environment variables in .Renviron
please restart your R session in order for the changes to take effect.
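For instance, the environment-variable equivalents of the dbConnect() parameters can be exported before starting R (all values below are placeholders; in .Renviron the same lines would appear without `export`):

```shell
# Placeholders for the dbConnect() parameters aws_access_key_id,
# aws_secret_access_key, s3_staging_dir and region_name respectively.
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY"
export AWS_ATHENA_S3_STAGING_DIR="s3://mybucket/query-results/"
export AWS_REGION="eu-west-1"
```

With these set, `dbConnect(noctua::athena())` needs no explicit credential arguments.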
Usage
## S4 method for signature 'AthenaDriver'
dbConnect(
drv,
aws_access_key_id = NULL,
aws_secret_access_key = NULL,
aws_session_token = NULL,
catalog_name = "AwsDataCatalog",
schema_name = "default",
work_group = NULL,
poll_interval = NULL,
encryption_option = c("NULL", "SSE_S3", "SSE_KMS", "CSE_KMS"),
kms_key = NULL,
profile_name = NULL,
role_arn = NULL,
role_session_name = sprintf("noctua-session-%s", as.integer(Sys.time())),
duration_seconds = 3600L,
s3_staging_dir = NULL,
region_name = NULL,
bigint = c("integer64", "integer", "numeric", "character"),
binary = c("raw", "character"),
json = c("auto", "character"),
timezone = "UTC",
keyboard_interrupt = TRUE,
rstudio_conn_tab = TRUE,
endpoint_override = NULL,
...
)
Arguments
drv |
An object inheriting from DBI::DBIDriver. |
aws_access_key_id |
AWS access key ID |
aws_secret_access_key |
AWS secret access key |
aws_session_token |
AWS temporary session token |
catalog_name |
The catalog_name to which the connection belongs |
schema_name |
The schema_name to which the connection belongs |
work_group |
The name of the work group in which to run Athena queries. Currently defaulted to |
poll_interval |
Amount of time to wait when checking the query execution status. Default set to a random interval between 0.5 - 1 seconds. |
encryption_option |
Athena encryption at rest link.
Supported Amazon S3 Encryption Options |
kms_key |
AWS Key Management Service, please refer to link for more information around the concept. |
profile_name |
The name of a profile to use. If not given, then the default profile is used. To set profile name, the AWS Command Line Interface (AWS CLI) will need to be configured. To configure AWS CLI please refer to: Configuring the AWS CLI. |
role_arn |
The Amazon Resource Name (ARN) of the role to assume (such as |
role_session_name |
An identifier for the assumed role session. By default |
duration_seconds |
The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) up to the maximum session duration setting for the role. This setting can have a value from 1 hour to 12 hours. By default duration is set to 3600 seconds (1 hour). |
s3_staging_dir |
The location in Amazon S3 where your query results are stored, such as |
region_name |
Default region when creating new connections. Please refer to link for
AWS region codes (region code example: Region = EU (Ireland) |
bigint |
The R type that 64-bit integer types should be mapped to, default is bit64::integer64, which allows the full range of 64 bit integers. |
binary |
The R type that |
json |
Attempt to convert AWS Athena data types arrays, json using |
timezone |
Sets the timezone for the connection. The default is |
keyboard_interrupt |
Stops AWS Athena process when R gets a keyboard interrupt, currently defaults to |
rstudio_conn_tab |
Optional to get AWS Athena Schema from AWS Glue Catalogue and display it in RStudio's Connections Tab.
Default set to |
endpoint_override |
(character/list) The complete URL to use for the constructed client.
Normally, paws will automatically construct the appropriate URL to use when
communicating with a service. You can specify a complete URL (including the "http/https" scheme)
to override this behaviour. If this value is provided, then |
... |
other parameters for
|
Value
dbConnect()
returns DBI::DBIConnection. This object is used to communicate with AWS Athena.
See Also
Examples
# Connect to Athena using your aws access keys
## Not run:
library(DBI)
con <- dbConnect(noctua::athena(),
aws_access_key_id = "YOUR_ACCESS_KEY_ID",
aws_secret_access_key = "YOUR_SECRET_ACCESS_KEY",
s3_staging_dir = "s3://path/to/query/bucket/",
region_name = "us-west-2"
)
dbDisconnect(con)
# Connect to Athena using your profile name
# Profile name can be created by using AWS CLI
con <- dbConnect(noctua::athena(),
profile_name = "YOUR_PROFILE_NAME",
s3_staging_dir = "s3://path/to/query/bucket/"
)
dbDisconnect(con)
# Connect to Athena using ARN role
con <- dbConnect(noctua::athena(),
profile_name = "YOUR_PROFILE_NAME",
role_arn = "arn:aws:sts::123456789012:assumed-role/role_name/role_session_name",
s3_staging_dir = "s3://path/to/query/bucket/"
)
dbDisconnect(con)
## End(Not run)
dbConvertTable aws s3 backend file types.
Description
Utilises AWS Athena to convert AWS S3 backend file types. It also allows the creation of more efficient file types, i.e. "parquet" and "orc", from SQL queries.
Usage
dbConvertTable(conn, obj, name, ...)
## S4 method for signature 'AthenaConnection'
dbConvertTable(
conn,
obj,
name,
partition = NULL,
s3.location = NULL,
file.type = c("NULL", "csv", "tsv", "parquet", "json", "orc"),
compress = TRUE,
data = TRUE,
...
)
Arguments
conn |
A DBI::DBIConnection object, |
obj |
Athena table or |
name |
Name of destination table |
... |
Extra parameters, currently not used |
partition |
Partition Athena table |
s3.location |
location to store output file, must be in s3 uri format for example ("s3://mybucket/data/"). |
file.type |
File type for |
compress |
Compress |
data |
If |
Value
dbConvertTable()
returns TRUE
, invisibly.
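A sketch of converting an existing Athena table's S3 backend files to Parquet (requires an AWS account; the table name, destination name and S3 location are illustrative):

```r
## Not run:
library(DBI)

# Demo connection to Athena using a profile name
con <- dbConnect(noctua::athena())

# Convert the files backing an existing table into a new Parquet table
dbConvertTable(con,
  obj = "mtcars",
  name = "mtcars_parquet",
  file.type = "parquet",
  s3.location = "s3://mybucket/mtcars-parquet/"
)

# The new, more efficient table can now be queried as usual
dbReadTable(con, "mtcars_parquet")
dbDisconnect(con)
## End(Not run)
```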
List Athena Tables
Description
Returns the unquoted names of Athena tables accessible through this connection.
Usage
## S4 method for signature 'AthenaConnection'
dbListTables(conn, catalog = NULL, schema = NULL, ...)
Arguments
conn |
A DBI::DBIConnection object, as returned by dbConnect(). |
catalog |
Athena catalog, default set to NULL to return all tables from all Athena catalogs |
schema |
Athena schema, default set to NULL to return all tables from all Athena schemas. Note: The use of DATABASE and SCHEMA is interchangeable within Athena. |
... |
Other parameters passed on to methods. |
Value
dbListTables()
returns a character vector with all the tables from Athena.
See Also
Examples
## Not run:
# Note:
# - Require AWS Account to run below example.
# - Different connection methods can be used please see `noctua::dbConnect` documentation
library(DBI)
# Demo connection to Athena using profile name
con <- dbConnect(noctua::athena())
# Return list of tables in Athena
dbListTables(con)
# Disconnect connection
dbDisconnect(con)
## End(Not run)
S3 implementation of db_compute
for Athena
Description
This is a backend function for dplyr's compute
function. Users won't be required to access and run this function.
Usage
db_compute.AthenaConnection(
con,
table,
sql,
...,
overwrite = FALSE,
temporary = FALSE,
unique_indexes = list(),
indexes = list(),
analyze = TRUE,
in_transaction = FALSE,
partition = NULL,
s3_location = NULL,
file_type = c("csv", "tsv", "parquet"),
compress = FALSE
)
sql_query_save.AthenaConnection(
con,
sql,
name,
temporary = TRUE,
...,
partition = NULL,
s3_location = NULL,
file_type = NULL,
compress = FALSE
)
Arguments
con |
A dbConnect object, as returned by |
table |
Table name; if left as default, noctua will use the default from |
sql |
SQL code to be sent to the data |
... |
passes |
overwrite |
Allows overwriting the destination table. Cannot be |
temporary |
if TRUE, will create a temporary table that is local to this connection and will be automatically deleted when the connection expires |
unique_indexes |
a list of character vectors. Each element of the list will create a new unique index over the specified column(s). Duplicate rows will result in failure. |
indexes |
a list of character vectors. Each element of the list will create a new index. |
analyze |
if TRUE (the default), will automatically ANALYZE the new table so that the query optimiser has useful information. |
in_transaction |
Should the table creation be wrapped in a transaction? This typically makes things faster, but you may want to suppress if the database doesn't support transactions, or you're wrapping in a transaction higher up (and your database doesn't support nested transactions.) |
partition |
Partition Athena table (needs to be a named list or vector) for example: |
s3_location |
s3 bucket to store Athena table, must be set as a s3 uri for example ("s3://mybucket/data/") |
file_type |
What file type to store data.frame on s3, noctua currently supports |
compress |
|
name |
Table name; if left as default, noctua will use the default from |
Value
db_compute
returns table name
See Also
Examples
## Not run:
# Note:
# - Require AWS Account to run below example.
# - Different connection methods can be used please see `noctua::dbConnect` documentation
library(DBI)
library(dplyr)
# Demo connection to Athena using profile name
con <- dbConnect(noctua::athena())
# Write data.frame to Athena table
copy_to(con, mtcars,
s3_location = "s3://mybucket/data/"
)
# Write Athena table from tbl_sql
athena_mtcars <- tbl(con, "mtcars")
mtcars_filter <- athena_mtcars %>% filter(gear >= 4)
# create Athena table with a unique table name
mtcars_filter %>%
  compute()
# create Athena table with a specified name and s3 location
mtcars_filter %>%
  compute("mtcars_filter",
    s3_location = "s3://mybucket/mtcars_filter/"
  )
# Disconnect from Athena
dbDisconnect(con)
## End(Not run)
S3 implementation of db_connection_describe
for Athena (api version 2).
Description
This is a backend function for dplyr to retrieve meta data about Athena queries. Users won't be required to access and run this function.
Usage
db_connection_describe.AthenaConnection(con)
Arguments
con |
A dbConnect object, as returned by |
Value
Character variable containing metadata about the query sent to Athena. The metadata is returned in the following format:
"Athena <paws version> [<profile_name>@region/database]"
S3 implementation of db_copy_to
for Athena
Description
This is an Athena method for dbplyr function db_copy_to
to create an Athena table from a data.frame
.
Usage
db_copy_to.AthenaConnection(
con,
table,
values,
...,
partition = NULL,
s3_location = NULL,
file_type = c("csv", "tsv", "parquet"),
compress = FALSE,
max_batch = Inf,
overwrite = FALSE,
append = FALSE,
types = NULL,
temporary = TRUE,
unique_indexes = NULL,
indexes = NULL,
analyze = TRUE,
in_transaction = FALSE
)
Arguments
con |
A dbConnect object, as returned by |
table |
A character string specifying a table name. Names will be automatically quoted so you can use any sequence of characters, not just any valid bare table name. |
values |
A data.frame to write to the database. |
... |
other parameters currently not supported in noctua |
partition |
Partition Athena table (needs to be a named list or vector) for example: |
s3_location |
s3 bucket to store Athena table, must be set as a s3 uri for example ("s3://mybucket/data/") |
file_type |
What file type to store data.frame on s3, noctua currently supports |
compress |
|
max_batch |
Split the data frame by a maximum number of rows (e.g. 100,000) so that multiple files can be uploaded into AWS S3. By default, when compression
is set to |
overwrite |
Allows overwriting the destination table. Cannot be |
append |
Allow appending to the destination table. Cannot be |
types |
Additional field types used to override derived types. |
temporary |
if TRUE, will create a temporary table that is local to this connection and will be automatically deleted when the connection expires |
unique_indexes |
a list of character vectors. Each element of the list will create a new unique index over the specified column(s). Duplicate rows will result in failure. |
indexes |
a list of character vectors. Each element of the list will create a new index. |
analyze |
if TRUE (the default), will automatically ANALYZE the new table so that the query optimiser has useful information. |
in_transaction |
Should the table creation be wrapped in a transaction? This typically makes things faster, but you may want to suppress if the database doesn't support transactions, or you're wrapping in a transaction higher up (and your database doesn't support nested transactions.) |
Value
db_copy_to returns table name
See Also
Examples
## Not run:
# Note:
# - Require AWS Account to run below example.
# - Different connection methods can be used please see `noctua::dbConnect` documentation
library(DBI)
library(dplyr)
# Demo connection to Athena using profile name
con <- dbConnect(noctua::athena())
# List existing tables in Athena
dbListTables(con)
# Write data.frame to Athena table
copy_to(con, mtcars,
s3_location = "s3://mybucket/data/"
)
# Checking if uploaded table exists in Athena
dbExistsTable(con, "mtcars")
# Write Athena table from tbl_sql
athena_mtcars <- tbl(con, "mtcars")
mtcars_filter <- athena_mtcars %>% filter(gear >= 4)
copy_to(con, mtcars_filter)
# Checking if uploaded table exists in Athena
dbExistsTable(con, "mtcars_filter")
# Disconnect from Athena
dbDisconnect(con)
## End(Not run)
S3 implementation of db_desc
for Athena (api version 1).
Description
This is a backend function for dplyr to retrieve meta data about Athena queries. Users won't be required to access and run this function.
Usage
db_desc.AthenaConnection(x)
Arguments
x |
A dbConnect object, as returned by |
Value
Character variable containing metadata about the query sent to Athena. The metadata is returned in the following format:
"Athena <paws version> [<profile_name>@region/database]"
Declare which version of dbplyr API is being called.
Description
Declare which version of dbplyr API is being called.
Usage
dbplyr_edition.AthenaConnection(con)
Arguments
con |
A dbConnect object, as returned by |
Value
Integer for which version of dbplyr
is going to be used.
A method to configure noctua backend options.
Description
noctua_options()
provides a method to change the backend. This includes changing the file parser,
whether noctua
should cache query IDs locally, and the number of retries on a failed API call.
Usage
noctua_options(
file_parser,
bigint,
binary,
json,
cache_size,
clear_cache,
retry,
retry_quiet,
unload,
clear_s3_resource,
verbose
)
Arguments
file_parser |
Method to read and write tables to Athena, currently defaults to |
bigint |
The R type that 64-bit integer types should be mapped to (default: |
binary |
The R type that |
json |
Attempt to convert AWS Athena data types |
cache_size |
Number of queries to be cached. Currently only support caching up to 100 distinct queries (default: |
clear_cache |
Clears all previous cached query metadata |
retry |
Maximum number of requests to attempt (default: |
retry_quiet |
This method is deprecated please use verbose instead. |
unload |
set AWS Athena unload functionality globally (default: |
clear_s3_resource |
Clear down |
verbose |
Print package info messages (default: |
Value
noctua_options()
invisibly returns the list of Athena options from the package environment.
Examples
library(noctua)
# change file parser from default data.table to vroom
noctua_options("vroom")
# cache queries locally
noctua_options(cache_size = 5)
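The remaining backend options can be adjusted in the same call; a brief sketch using the parameters from the Usage section above (the values chosen here are illustrative only, not package defaults):

```r
library(noctua)
# Increase the number of retries on failed API calls
# and enable AWS Athena unload functionality globally
noctua_options(retry = 10, unload = TRUE)
# Clear previously cached query metadata and silence package messages
noctua_options(clear_cache = TRUE, verbose = FALSE)
```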
Get Session Tokens for PAWS Connection
Description
Returns a set of temporary credentials for an AWS account or IAM user (link).
Usage
get_session_token(
profile_name = NULL,
region_name = NULL,
serial_number = NULL,
token_code = NULL,
duration_seconds = 3600L,
set_env = FALSE
)
Arguments
profile_name |
The name of a profile to use. If not given, then the default profile is used. To set profile name, the AWS Command Line Interface (AWS CLI) will need to be configured. To configure AWS CLI please refer to: Configuring the AWS CLI. |
region_name |
Default region when creating new connections. Please refer to link for
AWS region codes (region code example: Region = EU (Ireland) |
serial_number |
The identification number of the MFA device that is associated with the IAM user who is making the GetSessionToken call.
Specify this value if the IAM user has a policy that requires MFA authentication. The value is either the serial number for a hardware device
(such as |
token_code |
The value provided by the MFA device, if MFA is required. If any policy requires the IAM user to submit an MFA code, specify this value. If MFA authentication is required, the user must provide a code when requesting a set of temporary security credentials. A user who fails to provide the code receives an "access denied" response when requesting resources that require MFA authentication. |
duration_seconds |
The duration, in seconds, that the credentials should remain valid. Acceptable durations for IAM user sessions range from 900 seconds (15 minutes) to 129,600 seconds (36 hours), with 3,600 seconds (1 hour) as the default. |
set_env |
If set to |
Value
get_session_token()
returns a list containing: "AccessKeyId"
, "SecretAccessKey"
, "SessionToken"
and "Expiration"
Examples
## Not run:
# Note:
# - Requires an AWS account to run the below example.
library(noctua)
library(DBI)
# Create Temporary Credentials duration 1 hour
get_session_token("YOUR_PROFILE_NAME",
serial_number = "arn:aws:iam::123456789012:mfa/user",
token_code = "531602",
set_env = TRUE
)
# Connect to Athena using temporary credentials
con <- dbConnect(athena())
## End(Not run)
Creates query to create a simple Athena table
Description
Creates an interface to compose CREATE EXTERNAL TABLE.
Usage
## S4 method for signature 'AthenaConnection'
sqlCreateTable(
con,
table,
fields,
field.types = NULL,
partition = NULL,
s3.location = NULL,
file.type = c("tsv", "csv", "parquet", "json"),
compress = FALSE,
...
)
Arguments
con |
A database connection. |
table |
The table name, passed on to
|
fields |
Either a character vector or a data frame. A named character vector: names are column names, values are types.
Names are escaped with dbQuoteIdentifier().
A data frame: field types are generated using dbDataType(). |
field.types |
Additional field types used to override derived types. |
partition |
Partition Athena table (needs to be a named list or vector) for example: |
s3.location |
s3 bucket to store the Athena table; must be set as an s3 uri, for example ("s3://mybucket/data/").
By default s3.location is set to the s3 staging directory from |
file.type |
What file type to store the data.frame on s3; noctua currently supports |
compress |
|
... |
Other arguments used by individual methods. |
Value
sqlCreateTable
returns the data.frame's DDL in SQL format.
See Also
Examples
## Not run:
# Note:
# - Requires an AWS account to run the below example.
# - Different connection methods can be used; please see `noctua::dbConnect` documentation.
library(DBI)
# Demo connection to Athena using profile name
con <- dbConnect(noctua::athena())
# Create DDL for iris data.frame
sqlCreateTable(con, "iris", iris, s3.location = "s3://path/to/athena/table")
# Create DDL for iris data.frame with partition
sqlCreateTable(con, "iris", iris,
partition = "timestamp",
s3.location = "s3://path/to/athena/table"
)
# Create DDL for iris data.frame with partition and file.type parquet
sqlCreateTable(con, "iris", iris,
partition = "timestamp",
s3.location = "s3://path/to/athena/table",
file.type = "parquet"
)
# Disconnect from Athena
dbDisconnect(con)
## End(Not run)
Converts data frame into suitable format to be uploaded to Athena
Description
This method converts data.frame columns into the correct format so that they can be uploaded to Athena.
Usage
## S4 method for signature 'AthenaConnection'
sqlData(
con,
value,
row.names = NA,
file.type = c("tsv", "csv", "parquet", "json"),
...
)
Arguments
con |
A database connection. |
value |
A data frame |
row.names |
Either TRUE, FALSE, NA or a string. If TRUE, always translate row names to a column called "row_names". If FALSE, never translate row names. If NA, translate row names only if they are a character vector. A string is equivalent to TRUE, but allows you to override the default name. For backward compatibility, NULL is equivalent to FALSE. |
file.type |
What file type to store the data.frame on s3; noctua currently supports |
... |
Other arguments used by individual methods. |
Value
sqlData
returns a data.frame formatted for Athena. Currently converts list
columns into character
, split by '|'
, similar to how data.table
writes out to files.
See Also
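This page has no Examples section; a minimal sketch of the list-column conversion described above, in the style of the manual's other `## Not run:` examples (requires an AWS account):

```r
## Not run:
library(DBI)
con <- dbConnect(noctua::athena())
# data.frame with a list column
df <- data.frame(id = 1:2)
df$tags <- list(c("a", "b"), "c")
# The list column is converted to character, split by '|' (e.g. "a|b")
sqlData(con, df)
dbDisconnect(con)
## End(Not run)
```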
AWS Athena backend dbplyr version 1 and 2
Description
Creates an S3 implementation of sql_translate_env
for the AWS Athena SQL translation environment, based on
Athena Data Types and
DML Queries, Functions, and Operators.
Usage
sql_translation.AthenaConnection(con)
sql_translate_env.AthenaConnection(con)
Arguments
con |
An |
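A brief sketch of how this translation environment is exercised through dbplyr (requires an AWS account; `translate_sql()` is a dbplyr function, shown here for illustration):

```r
## Not run:
library(DBI)
library(dbplyr)
con <- dbConnect(noctua::athena())
# Inspect how an R expression is translated into Athena SQL
translate_sql(as.character(cyl), con = con)
dbDisconnect(con)
## End(Not run)
```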
Athena Work Groups
Description
Lower level API access, allows users to create and delete Athena Work Groups.
- create_work_group
Creates a workgroup with the specified name (link). The work group utilises parameters from the
dbConnect
object to determine the encryption and output location of the work group. The s3_staging_dir, encryption_option and kms_key parameters are taken from dbConnect.
- tag_options
Helper function to create tag options for the function
create_work_group()
- delete_work_group
Deletes the workgroup with the specified name (link). The primary workgroup cannot be deleted.
- list_work_groups
Lists available workgroups for the account (link).
- get_work_group
Returns information about the workgroup with the specified name (link).
- update_work_group
Updates the workgroup with the specified name (link). The workgroup's name cannot be changed. The work group utilises parameters from the
dbConnect
object to determine the encryption and output location of the work group. The s3_staging_dir, encryption_option and kms_key parameters are taken from dbConnect.
Usage
create_work_group(
conn,
work_group = NULL,
enforce_work_group_config = FALSE,
publish_cloud_watch_metrics = FALSE,
bytes_scanned_cut_off = 10000000L,
description = NULL,
tags = tag_options(key = NULL, value = NULL)
)
tag_options(key = NULL, value = NULL)
delete_work_group(conn, work_group = NULL, recursive_delete_option = FALSE)
list_work_groups(conn)
get_work_group(conn, work_group = NULL)
update_work_group(
conn,
work_group = NULL,
remove_output_location = FALSE,
enforce_work_group_config = FALSE,
publish_cloud_watch_metrics = FALSE,
bytes_scanned_cut_off = 10000000L,
description = NULL,
state = c("ENABLED", "DISABLED"),
engine_version = list()
)
Arguments
conn |
A dbConnect object, as returned by |
work_group |
The Athena workgroup name. |
enforce_work_group_config |
If set to |
publish_cloud_watch_metrics |
Indicates that the Amazon CloudWatch metrics are enabled for the workgroup. |
bytes_scanned_cut_off |
The upper data usage limit (cutoff) for the amount of bytes a single query in a workgroup is allowed to scan. |
description |
The workgroup description. |
tags |
A tag that you can add to a resource. A tag is a label that you assign to an AWS Athena resource (a workgroup).
Each tag consists of a key and an optional value, both of which you define. Tags enable you to categorize workgroups in Athena, for example,
by purpose, owner, or environment. Use a consistent set of tag keys to make it easier to search and filter workgroups in your account.
The maximum tag key length is 128 Unicode characters in UTF-8. The maximum tag value length is 256 Unicode characters in UTF-8.
You can use letters and numbers representable in UTF-8, and the following characters: |
key |
A tag key. The tag key length is from 1 to 128 Unicode characters in UTF-8. You can use letters and numbers representable in UTF-8, and the following characters: |
value |
A tag value. The tag value length is from 0 to 256 Unicode characters in UTF-8. You can use letters and numbers representable in UTF-8, and the following characters: |
recursive_delete_option |
The option to delete the workgroup and its contents even if the workgroup contains any named queries. |
remove_output_location |
If set to |
state |
The workgroup state that will be updated for the given workgroup. |
engine_version |
The engine version requested when a workgroup is updated.
|
Value
- create_work_group
Returns NULL, invisibly.
- tag_options
Returns a list, invisibly.
- delete_work_group
Returns NULL, invisibly.
- list_work_groups
Returns a list of available work groups.
- get_work_group
Returns a list of work group metadata.
- update_work_group
Returns NULL, invisibly.
Examples
## Not run:
# Note:
# - Requires an AWS account to run the below example.
# - Different connection methods can be used; please see `noctua::dbConnect` documentation.
library(noctua)
# Demo connection to Athena using profile name
con <- dbConnect(noctua::athena())
# List current work group available
list_work_groups(con)
# Create a new work group
wg <- create_work_group(con,
"demo_work_group",
description = "This is a demo work group",
tags = tag_options(key = "demo_work_group", value = "demo_01")
)
# List work groups to see new work group
list_work_groups(con)
# get meta data from work group
wg <- get_work_group(con, "demo_work_group")
# Update work group
wg <- update_work_group(con, "demo_work_group",
description = "This is a demo work group update"
)
# get updated meta data from work group
wg <- get_work_group(con, "demo_work_group")
# Delete work group
delete_work_group(con, "demo_work_group")
# Disconnect from Athena
dbDisconnect(con)
## End(Not run)