- Improved the efficiency of thread collection. Previously, each tweet passed to tcn_threads was retrieved individually as the initializing tweet, its conversation_id extracted and then used as a search filter to collect a single conversation thread. This was a very inefficient process for large numbers of threads, as it used one API request per thread and quickly reached the full-archive search endpoint rate-limit of 300 requests per 15 minutes (imposing a ceiling of at most 300 threads per 15 minutes). The initializing tweets passed to tcn_threads are now collected in batch at the start of an operation and the unique conversation_id values extracted from them are used for batch thread collection. An academic or full-archive enabled project has a search query character limit of 1024 characters, and projects limited to the recent search endpoint have a 512 character limit. A search query for multiple conversation_id values can be created, within the character limit, in the following format:

      conversation_id:xxxxxxxxxxxxxxxxxxx OR conversation_id:xxxxxxxxxxxxxxxxxxx OR ...

  The tcn_threads function now searches for as many conversation_id values per request as will fit into the search query character limit, significantly improving thread collection time. This is approximately 26 IDs per full-archive request and 13 per recent search request. So, for example, a maximum of 26 threads can be collected in a single request, assuming the sum of tweets from all threads is less than max_results. This translates to 7,800 threads per 15 minutes (300 * 26) for the full-archive endpoint if each request of 26 threads has fewer than 500 tweets.
- max_results is again set to 500 tweets for full-archive searches. This was previously set to 100 tweets, the same as the max_results for a recent endpoint search, because of an issue requesting the context_annotations tweet data field (Twitter impose a 100 tweet maximum for requests that include this field). context_annotations is no longer collected by default but can be included by using the annotations = TRUE parameter.
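A hedged usage sketch of the max_results and annotations behaviour described above. The tweet_ids, max_results and annotations argument names appear in this changelog; token and endpoint are taken from other entries, and the token object is assumed to have been created beforehand.

    library(voson.tcn)

    # token assumed to have been created earlier, e.g. with tcn_token()
    threads <- tcn_threads(
      tweet_ids = "1111111111111111111",  # placeholder id of a conversation tweet
      token = token,
      endpoint = "all",                   # full-archive search
      max_results = 500,                  # per-request page size for full-archive
      annotations = TRUE                  # opt back in to context_annotations
    )
    # Note: with annotations = TRUE the API limits each request to 100 tweets.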
- conversation_id queries.
- Requires R version 4.1 or greater.
- Changed tcn_tweets to accept any number of tweet ids or urls; previously it was limited to a maximum of 100.
- Added the referenced_tweets parameter to the tcn_tweets function, giving the option to specify whether referenced tweets should also be retrieved. The default value is FALSE.
- Added the retry_on_limit parameter to the tcn_tweets and tcn_counts functions.
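A hedged sketch of the tcn_tweets parameters mentioned above. The referenced_tweets and retry_on_limit argument names come from these entries; tweet_ids and token are assumptions carried over from other entries in this changelog.

    library(voson.tcn)

    # Any number of tweet ids or urls can now be passed
    tweets <- tcn_tweets(
      tweet_ids = c(
        "https://twitter.com/user/status/1111111111111111111",
        "2222222222222222222"
      ),
      token = token,
      referenced_tweets = FALSE,  # do not also retrieve referenced tweets (default)
      retry_on_limit = TRUE       # wait for the rate-limit to reset rather than exit
    )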
- Changed warning messages to use the message function instead.
- Removed the magrittr import and replaced pipes with the native R version.
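For reference, a minimal comparison of the magrittr pipe and the native R pipe that replaces it; the native pipe requires R 4.1 or later.

    # magrittr pipe (previous dependency):  x %>% f() %>% g()
    # native R pipe (R >= 4.1):             x |> f() |> g()
    c(1, 2, 3) |> sum() |> sqrt()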
- tcn_counts will now also accept tweet urls.
- tcn_threads.
- Added the tcn_counts function for retrieving tweet counts over time for conversation ids. This uses the API v2 tweets/counts endpoint and does not contribute to the monthly tweet cap. It can be used prior to collecting conversation threads to identify conversations to target, or conversations to add to a skip list.
- API requests using the max_results querystring parameter have started to return response status code 400 for values over 100.
- The max_results parameter of the tcn_threads function has been changed: it no longer refers to the maximum total results and now refers to, and allows setting of, the API parameter of the same name above. The default value has been set to 100. If left at the default, academic projects using the full-archive search endpoint will only collect 30,000 tweets per 15 minute rate-limit.
- The previous max_results parameter of the tcn_threads function has been renamed to max_total.
- Added the retry_on_limit parameter to tcn_threads to allow waiting for the API rate-limit to reset before continuing, rather than exiting upon reaching the limit.
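A hedged sketch combining the tcn_threads parameters described in the entries above; token and endpoint are assumptions based on other entries, and the values shown are illustrative only.

    library(voson.tcn)

    threads <- tcn_threads(
      tweet_ids = "1111111111111111111",
      token = token,
      endpoint = "all",        # academic full-archive search
      max_results = 100,       # API page size; values over 100 currently return status 400
      max_total = 20000,       # coarse cap on total tweets collected (formerly max_results)
      retry_on_limit = TRUE    # wait for the 15 minute rate-limit window to reset
    )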
- Added the tcn_tweets function. This function accepts tweet URLs or IDs and collects specific tweet data using the API v2 tweets endpoint. It currently supports only bearer access tokens.
- The httr request user-agent string is now set via a header field rather than options.
- Added conversation_id as a node and edge attribute to networks.
- The returned named list of dataframes now comprises tweets, users, errors and meta. The errors dataframe contains partial errors, such as those caused when trying to retrieve tweets that are not publicly available. The meta dataframe contains search results metadata such as the tweet id range, number of results and pagination token.
- public_metrics during the JSON to dataframe process.
- Renamed the end_point parameter to endpoint to be consistent with the Twitter documentation.
- Added a max_results parameter to coarsely limit how many tweets are collected in a tcn_threads operation. This is to assist with managing the monthly tweet cap placed on projects using the API search endpoints.
- Fixed a bug preventing get_tweets from returning requested tweets.
- Fixed an "object 'df_convo' not found" message shown when an endpoint related error occurs.
- Added start_time and end_time parameters for the academic track historical search endpoint (end_point = "all"). These are UTC datetime strings in ISO 8601 format. If unused, the API uses a default UTC start time of 30 days ago and a default end time of the current time minus 30 seconds.
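A hedged sketch of restricting a full-archive thread collection to a time window using ISO 8601 UTC datetimes; start_time, end_time and end_point = "all" come from the entry above, while tweet_ids and token are assumptions.

    threads <- tcn_threads(
      tweet_ids = "1111111111111111111",
      token = token,
      end_point = "all",                    # academic track historical search
      start_time = "2021-01-01T00:00:00Z",  # UTC, ISO 8601
      end_time = "2021-01-31T23:59:59Z"     # UTC, ISO 8601
    )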
- Fixed an anti_join on tweet_id error.
- The tcn_threads function now produces a named list of dataframes: tweets and users, for tweet data and user metadata.
- Added a bearer parameter to the tcn_token function to assign a bearer token directly rather than retrieving it with app keys.
- Retrieve a bearer token with the tcn_token function.
- Collect conversation threads with tcn_threads using the tweet_ids parameter.
- Use the academic track full-archive search with tcn_threads via the end_point = "all" parameter.
- Generate an actor or activity network using the tcn_network function; an activity network is specified with the type = "activity" parameter.