Packaging the data in the container

@@ -7,16 +7,8 @@ RUN apt-get update && apt-get install -y libcurl4-openssl-dev \
ADD ./etc/ /etc/shiny-server/
ADD ./app/ /srv/shiny-server/
ADD ./data /srv/shiny-server/data
COPY ./packages.R packages.R
#COPY ./aws.s3-master /tmp/aws.s3-master
RUN Rscript packages.R
#EXPOSE 3838
# allow permission
#RUN sudo chown -R shiny:shiny /srv/shiny-server
# run app
#CMD ["/usr/bin/shiny-server.sh"]
# Image for https://seuratapp.shiny.embl.de/
Data is packed into the app image.
## Building and pushing the image
This is done via the `.gitlab-ci.yml` file in this repo.
@@ -10,7 +10,6 @@
library(shiny)
require(ggplot2)
require(aws.s3)
# Define UI for application that draws a histogram
ui <- fluidPage(
@@ -36,10 +35,7 @@ ui <- fluidPage(
server <- function(input, output, session) {
query <- parseQueryString(isolate(session$clientData$url_search))
#query <- c("dataset" = "Tcells.rda")
s3con <- s3connection(paste0("seuratapp/",query["dataset"]),bucket = "lsteinme-shiny",region="")
load(s3con)
close(s3con)
load(file.path("data",query["dataset"]))
updateSelectizeInput(session, "tsne.gene",choices = rownames(scaled))
updateSelectizeInput(session, "tsne.type",choices = c("Clusters","Gene",colnames(metadata)))
.travis.yml
CONTRIBUTING.md
README.Rmd
Makefile
drat.sh
knitreadme.sh
^revdep$
^man-roxygen$
^.*\.Rproj$
^\.Rproj\.user$
^\.github.?
Contributions to **aws.s3** are welcome from anyone and are best sent as pull requests on [the GitHub repository](https://github.com/cloudyr/aws.s3/). This page provides some instructions to potential contributors about how to add to the package.
1. Contributions can be submitted as [a pull request](https://help.github.com/articles/creating-a-pull-request/) on GitHub by forking or cloning the [repo](https://github.com/cloudyr/aws.s3/), making changes and submitting the pull request.
2. The cloudyr project follows [a consistent style guide](http://cloudyr.github.io/styleguide/index.html) across all of its packages. Please refer to this when editing package code.
3. Pull requests should involve only one commit per substantive change. This means if you change multiple files (e.g., code and documentation), these changes should be committed together. If you don't know how to do this (e.g., you are making changes in the GitHub web interface) just submit anyway and the maintainer will clean things up.
4. All contributions must be submitted consistent with the package license ([GPL-2](http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html)).
5. Non-trivial contributions need to be noted in the `Authors@R` field in the [DESCRIPTION](https://github.com/cloudyr/aws.s3/blob/master/DESCRIPTION). Just follow the format of the existing entries to add your name (and, optionally, email address). Substantial contributions should also be noted in [`inst/CITATION`](https://github.com/cloudyr/aws.s3/blob/master/inst/CITATION).
6. The cloudyr project uses roxygen code and documentation markup, so changes should be made to the roxygen comments in the source code `.R` files. After making changes, regenerate the documentation; the easiest way to do this is a command-line call to `Rscript -e 'devtools::document()'`. Please resolve any roxygen errors before submitting a pull request.
7. Please run `R CMD build aws.s3` and `R CMD check aws.s3_VERSION.tar.gz` before submitting the pull request to check for any errors.
Some specific types of changes that you might make are:
1. Bug fixes. Great!
2. Documentation-only changes (e.g., to Rd files, README, vignettes). This is great! All contributions are welcome.
3. New functionality. This is fine, but should be discussed on [the GitHub issues page](https://github.com/cloudyr/aws.s3/issues) before submitting a pull request.
4. Changes requiring a new package dependency should also be discussed on [the GitHub issues page](https://github.com/cloudyr/aws.s3/issues) before submitting a pull request.
5. Message translations. These are very much appreciated! The format is a pain, but if you're doing this I'm assuming you're already familiar with it.
Any questions you have can be opened as GitHub issues or directed to thosjleeper (at) gmail.com.
Before filing an issue, please make sure you are using the latest *development* version, which you can install using `install.packages("aws.s3", repos = "https://rforge.net")` (see the README), since the issue may already have been fixed. Also search existing issues first to avoid duplicates.
Please specify whether your issue is about:
- [ ] a possible bug
- [ ] a question about package functionality
- [ ] a suggested code or documentation change, improvement to the code, or feature request
If you are reporting (1) a bug or (2) a question about code, please supply:
- [a fully reproducible example](http://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example) using a publicly available dataset (or provide your data)
- if an error is occurring, include the output of `traceback()` run immediately after the error occurs
- the output of `sessionInfo()`
Put your code here:
```R
## load package
library("aws.s3")
## code goes here
## session info for your system
sessionInfo()
```
Please ensure the following before submitting a PR:
- [ ] if suggesting code changes or improvements, [open an issue](https://github.com/cloudyr/aws.s3/issues/new) first
- [ ] for all but trivial changes (e.g., typo fixes), add your name to [DESCRIPTION](https://github.com/cloudyr/aws.s3/blob/master/DESCRIPTION)
- [ ] for all but trivial changes (e.g., typo fixes), document your change in [NEWS.md](https://github.com/cloudyr/aws.s3/blob/master/NEWS.md) with a parenthetical reference to the issue number being addressed
- [ ] if changing documentation, edit files in `/R` not `/man` and run `devtools::document()` to update documentation
- [ ] add code or new test files to [`/tests`](https://github.com/cloudyr/aws.s3/tree/master/tests/testthat) for any new functionality or bug fix
- [ ] make sure `R CMD check` runs without error before submitting the PR
.Renviron
.Rproj.user
.Rhistory
.RData
aws.s3.Rproj
revdep/
^revdep/.?$
language: r
sudo: false
cache: packages
matrix:
include:
- os: linux
dist: trusty
sudo: required
env: DRAT_DEPLOY=true
# Travis is currently broken for macos R so disable
# - os: osx
# osx_image: xcode9.2
r_packages:
- covr
- drat
r_github_packages:
- cloudyr/travisci
after_success:
- R -q -e 'library("covr");codecov()'
- test $TRAVIS_PULL_REQUEST == "false" && test $TRAVIS_BRANCH == "master" && test
$DRAT_DEPLOY == "true" && bash drat.sh
- R -q -e "travisci::restart_last_build('cloudyr/awspack')"
env:
global:
- secure: NkoBLN7g9Jjkn1kXfjFbiTmCRXgEFWAeRObLyiJZCA9FzeAN6pARvuV45Jkt8T3pRzma7MbOg1wN/FG7ywAXYYMeijIDUy5VxNZWxbmTOHVOeVBVW+H8qlrGxHF/KXHKKAcjnOnO2Z/rHrvBmXOJQbEXXUwPAlr7RNNxwasrFAY=
- secure: XT+piKb0x5Tt4Zac/h7K7IuO3/RtH+KatM7GjNKxUuMjCU8BMKon95e5AG3kCUfnKoEbeuPaV+zx2FwJXP72tVHmEfvVuxnPPH5rPQkEWfAd+THeTnVEqfi+64v4d/oDqY1nbdZ7H8PZNiRnMuWigsLjlmwkitXOpzNL2JPivZk=
- secure: o55+mZ+30+GkUiAjXeiCrKtuMSfpTzX119JS/TDH2E7Sdsnl7RN492K3VpzMQ6V4UjlP4nLMDSqqjvZlxFBUQwiSw9JJupbYjq8OKKN6vl46JJyyJMbt4Vr8C9XfCez5v2cShZt7d7ZRnSRUvl1+PfLVveEvzAo4XSgScO98CNk=
- secure: baEhpLdQX8Dw8waUg90CJtGOUiHpF+i2+bM+i+aUTkOVy8l2A/msHf1KLYokt3dbwqONNuMdULH3ktJgPlaJkmcald6vJCFVUu5SJxAoDYCMoDADtHYU4QukgU3S82ZiR2nKNzuxi2sQ+eV5oxZvB0usGxZpVkpxAztdE/p5imE=
Package: aws.s3
Type: Package
Title: 'AWS S3' Client Package
Version: 0.3.22
Authors@R: c(person("Thomas J.", "Leeper", role = "aut",
email = "thosjleeper@gmail.com",
comment = c(ORCID = "0000-0003-4097-6326")),
person("Boettiger", "Carl", role = "ctb"),
person("Andrew", "Martin", role = "ctb"),
person("Mark", "Thompson", role = "ctb"),
person("Tyler", "Hunt", role = "ctb"),
person("Steven", "Akins", role = "ctb"),
person("Bao", "Nguyen", role = "ctb"),
person("Thierry", "Onkelinx", role = "ctb"),
person("Andrii", "Degtiarov", role = "ctb"),
person("Dhruv", "Aggarwal", role = "ctb"),
person("Alyssa", "Columbus", role = "ctb"),
person("Simon", "Urbanek", role = c("cre", "ctb"),
email = "simon.urbanek@R-project.org")
)
Maintainer: Simon Urbanek <simon.urbanek@R-project.org>
Description: A simple client package for the Amazon Web Services ('AWS') Simple
Storage Service ('S3') 'REST' 'API' <https://aws.amazon.com/s3/>.
License: GPL (>= 2)
URL: https://github.com/cloudyr/aws.s3
BugReports: https://github.com/cloudyr/aws.s3/issues
Encoding: UTF-8
Imports:
utils,
tools,
curl,
httr,
xml2 (> 1.0.0),
base64enc,
digest,
aws.signature (>= 0.3.7)
Suggests:
testthat,
datasets
RoxygenNote: 7.1.0
pkg = $(shell basename $(CURDIR))
all: build
NAMESPACE: R/*
Rscript -e "devtools::document()"
README.md: README.Rmd
Rscript -e "knitr::knit('README.Rmd')"
README.html: README.md
pandoc -o README.html README.md
../$(pkg)*.tar.gz: DESCRIPTION NAMESPACE README.md
cd ../ && R CMD build $(pkg)
build: ../$(pkg)*.tar.gz
check: ../$(pkg)*.tar.gz
cd ../ && R CMD check $(pkg)*.tar.gz
rm ../$(pkg)*.tar.gz
install: ../$(pkg)*.tar.gz
cd ../ && R CMD INSTALL $(pkg)*.tar.gz
rm ../$(pkg)*.tar.gz
# Generated by roxygen2: do not edit by hand
S3method(as.data.frame,s3_bucket)
S3method(get_bucketname,character)
S3method(get_bucketname,s3_bucket)
S3method(get_bucketname,s3_object)
S3method(get_objectkey,character)
S3method(get_objectkey,s3_object)
S3method(print,aws_error)
S3method(print,s3_bucket)
S3method(print,s3_object)
export(bucket_exists)
export(bucket_list_df)
export(bucketexists)
export(bucketlist)
export(copy_bucket)
export(copy_object)
export(copybucket)
export(copyobject)
export(delete_bucket)
export(delete_bucket_policy)
export(delete_cors)
export(delete_encryption)
export(delete_lifecycle)
export(delete_object)
export(delete_replication)
export(delete_tagging)
export(delete_website)
export(deletebucket)
export(deleteobject)
export(get_acceleration)
export(get_acl)
export(get_bucket)
export(get_bucket_df)
export(get_bucket_policy)
export(get_bucketname)
export(get_cors)
export(get_encryption)
export(get_lifecycle)
export(get_location)
export(get_notification)
export(get_object)
export(get_objectkey)
export(get_replication)
export(get_requestpayment)
export(get_tagging)
export(get_torrent)
export(get_uploads)
export(get_versioning)
export(get_versions)
export(get_website)
export(getbucket)
export(getobject)
export(head_object)
export(headobject)
export(object_exists)
export(object_size)
export(put_acceleration)
export(put_acl)
export(put_bucket)
export(put_bucket_policy)
export(put_cors)
export(put_encryption)
export(put_folder)
export(put_lifecycle)
export(put_notification)
export(put_object)
export(put_replication)
export(put_requestpayment)
export(put_tagging)
export(put_versioning)
export(put_website)
export(putbucket)
export(putobject)
export(s3HTTP)
export(s3connection)
export(s3load)
export(s3readRDS)
export(s3read_using)
export(s3save)
export(s3saveRDS)
export(s3save_image)
export(s3source)
export(s3sync)
export(s3write_using)
export(save_object)
export(saveobject)
export(select_object)
import(aws.signature)
import(httr)
importFrom(base64enc,base64encode)
importFrom(curl,curl)
importFrom(curl,handle_setheaders)
importFrom(curl,new_handle)
importFrom(digest,digest)
importFrom(tools,file_ext)
importFrom(tools,md5sum)
importFrom(utils,URLencode)
importFrom(utils,head)
importFrom(utils,str)
importFrom(utils,tail)
importFrom(xml2,as_list)
importFrom(xml2,read_xml)
importFrom(xml2,write_xml)
importFrom(xml2,xml_add_child)
# aws.s3 0.3.22
## API changes
* `put_object` splits the use of files vs. payload into two separate arguments: `what` is now the first positional argument and expects the actual content to store, while `file` is now a named argument used to store the contents of a file. Previously using `file` for both was very dangerous: if the file was not found, the filename itself was stored instead of the content, without warning.
Old code that intended to use files such as:
```r
put_object("foo.csv", "bucket")
```
has to use either of
```r
put_object(file="foo.csv", bucket="bucket")
## or (not recommended)
put_object(, "bucket", file="foo.csv")
```
Any code that used `file=` explicitly and no positional arguments doesn't need to change.
## Features
* `put_object` supports connections, including non-seekable ones
## Bugfixes
* `put_object` now closes its connections properly (#354)
# aws.s3 0.3.21
* `s3HTTP()` (and thus all API functions) gain `write_fn=function(x) {...}` argument which allows chunk-wise streaming output for `GET` requests.
* Replace `sprintf()` where possible to avoid type mismatches. (#329)
* Handle result of length one in `bucketlist()` correctly. (#333)
* Setting `region=""` and custom `base_url` enables the use of single-host non-AWS back-ends (e.g., [minio](https://github.com/minio/minio)). (#340)
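The minio setup described above can be sketched as follows. This is a hedged example, not from the package documentation: the endpoint address, credentials, and use of `bucketlist()` are illustrative placeholders for a locally running minio server.

```r
library("aws.s3")

## Placeholder endpoint and the default minio development credentials;
## substitute your own server and keys.
Sys.setenv(
  "AWS_S3_ENDPOINT"       = "localhost:9000",
  "AWS_ACCESS_KEY_ID"     = "minioadmin",
  "AWS_SECRET_ACCESS_KEY" = "minioadmin"
)

## region = "" tells the request machinery to treat the endpoint as a
## single host rather than composing an AWS regional hostname.
bucketlist(region = "")
```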
* `s3read_using()` now cleans up after itself. (#270) It also gains a new argument `filename`, which allows specifying the actual name of the file that will be used. (#341)
* Avoid invalid scientific notation in content sizes. (#299, h/t Martijn Schuemie)
* `s3sync()` has been re-factored to work on directories instead of file lists. Please read the documentation; the arguments have changed. The previous version never really worked for anything other than syncing the working directory. Addresses many `s3sync()` issues, including #346.
# aws.s3 0.3.20
* Add `acl` and `header` arguments to `put_acl()`, ala `put_object()`. (#137)
* Make sure content-length is an integer (#254)
# aws.s3 0.3.19
* `put_bucket()` gains a `location_constraint` argument, which - if NULL - does not pass a LocationConstraint body argument. This is useful for S3-compatible storage. (#189)
# aws.s3 0.3.18
* Allowed both virtual- and path-style URLs for S3-compatible storage and fixed region handling for S3-compatible URLs. (#189)
* Fixed a request signature bug in `put_bucket()` when `region = "us-east-1"`. (#243)
# aws.s3 0.3.17
* Added `s3connection()` function to stream objects from S3. (#217)
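A minimal sketch of streaming with `s3connection()`, mirroring its use in the Shiny app diff earlier in this document. The bucket and object names are placeholders; the call assumes valid AWS credentials are configured.

```r
library("aws.s3")

## Open a read connection to an object without downloading it whole.
con <- s3connection("some/large/file.csv", bucket = "my-bucket")

## Read only the first 10 rows from the stream.
first_rows <- read.csv(con, nrows = 10)

## Connections should be closed explicitly when done.
close(con)
```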
# aws.s3 0.3.16
* Refactored `put_object(multipart = TRUE)` to improve memory efficiency. (h/t Andrii Degtiarov, #242)
* Added provisional support for S3 SELECT via the `select_object()` function. (#224)
# aws.s3 0.3.14
* Fixed several bugs in `put_object(multipart = TRUE)`. (#80)
* Tentatively, the `s3HTTP()` argument `check_region` now defaults to FALSE. (#45, #46, #106, #122, #185, #230)
# aws.s3 0.3.13
* `s3HTTP()` gains a `show_progress` logical argument specifying whether to print a progress bar for PUT, POST, and GET requests. (#235, h/t R. Roebuck)
* `head_object()` now simply returns a logical, without an extraneous class.
* New function `object_size()` provides a convenient wrapper around the "content-length" attribute of `head_object()`. (#234, h/t P. Roebuck)
* `object_exists()` is now implemented as a synonym for `head_object()` (#234, h/t P. Roebuck)
# aws.s3 0.3.12
* `s3write_using()` now attaches the correct file extension to the temporary file being written to (just as `s3read_using()` already did). (#226, h/t @jon-mago)
# aws.s3 0.3.11
* `s3sync()` gains a `direction` argument allowing for unidirectional (upload-only or download-only) synchronization. The default remains bi-directional.
* New functions `put_encryption()`, `get_encryption()`, and `delete_encryption()` implement bucket-level encryption so that encryption does not need to be specified for each `put_object()` call. (#183, h/t Dan Tenenbaum)
* Fixed typos in `s3sync()`. (#211, h/t Nirmal Patel)
* `put_bucket()` only includes a LocationConstraint body when the region != "us-east-1". (#171, h/t David Griswold)
# aws.s3 0.3.10
* Fixed a typo in `setup_s3_url()`. (#223, h/t Peter Foley)
* Signatures are now calculated correctly when a port is specified. (#221, h/t @rvolykh)
# aws.s3 0.3.9
* Fixed a bug in `s3write_using()`. (#205, h/t Patrick Miller)
* Bumped **aws.signature** dependency to v0.3.7 to take advantage of automatic credential loading. (#184, h/t Dan Tenenbaum)
* `acl` argument was ignored by `put_bucket()`. This is now fixed. (#172)
* The `base_url` argument in `s3HTTP()` now defaults to an environment variable - `AWS_S3_ENDPOINT` - or the AWS S3 default in order to facilitate using the package with S3-compatible storage. (#189, #191, #194)
# aws.s3 0.3.8
* `save_object()` now uses `httr::write_disk()` to avoid having to load a file into memory. (#158, h/t Arturo Saco)
# aws.s3 0.3.7
* Remove usage of `endsWith()` in two places to reduce (implicit) base R dependency. (#147, h/t Huang Pan)
# aws.s3 0.3.6
* Bump **aws.signature** dependency to 0.3.4. (#142, #143, #144)
# aws.s3 0.3.5
* Attempt to fix bug introduced in 0.3.4. (#142)
# aws.s3 0.3.4
* Update code and documentation to use aws.signature (>=0.3.2) credentials handling.
# aws.s3 0.3.3
* `put_object()` and `put_bucket()` now expose explicit `acl` arguments. (#137)
* `get_acl()` and `put_acl()` are now exported. (#137)
* Added a high-level `put_folder()` convenience function for creating an empty pseudo-folder.
# aws.s3 0.3.2
* `put_bucket()` now errors if the request is unsuccessful. (#132, h/t Sean Kross)
* Fixed a bug in the internal function `setup_s3_url()` when `region = ""`.
# aws.s3 0.3.1
* DESCRIPTION file fix for CRAN.
# aws.s3 0.3.0
* CRAN (beta) release. (#126)
* `bucketlist()` gains both an alias, `bucket_list_df()`, and an argument `add_region` to add a region column to the output data frame.
# aws.s3 0.2.8
* Exported the `s3sync()` function. (#20)
* `save_object()` now creates a local directory if needed before trying to save. This is useful for object keys containing `/`.
# aws.s3 0.2.7
* Some small bug fixes.
* Updated examples and links to API documentation.
# aws.s3 0.2.6
* Tweak region checking in `s3HTTP()`.
# aws.s3 0.2.5
* Fix reversed argument order in `s3readRDS()` and `s3saveRDS()`.
* Fixed the persistent bug related to `s3readRDS()`. (#59)
* Updated some documentation.
# aws.s3 0.2.4
* Mocked up multipart upload functionality within `put_object()`. (#80)
* Use `tempfile()` instead of `rawConnection()` for high-level read/write functions. (#128)
* Allow multiple CommonPrefix values in `get_bucket()`. (#88)
* `get_object()` now returns a pure raw vector (without attributes). (#94)
* `s3sync()` relies on `get_bucket(max = Inf)`. (#20)
* `s3HTTP()` gains a `base_url` argument to (potentially) support S3-compatible storage on non-AWS servers. (#109)
* `s3HTTP()` gains a `dualstack` argument to provide "dual stack" (IPv4 and IPv6) support. (#62)
# aws.s3 0.2.3
* Fixed a bug in `get_bucket()` when `max = Inf`. (#127, h/t Liz Macfie)
# aws.s3 0.2.2
* Two new functions - `s3read_using()` and `s3write_using()` provide a generic interface to reading and writing objects from S3 using a specified function. This provides a simple and extensible interface for the import and export of objects (such as data frames) in formats other than those provided by base R. (#125, #99)
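A sketch of the generic read/write interface, round-tripping a data frame as CSV. The bucket name is a placeholder and the call assumes configured credentials; any reader/writer pair (e.g., from **readr** or **jsonlite**) could be substituted for `write.csv`/`read.csv`.

```r
library("aws.s3")

## Serialize a data frame to CSV and upload it in one step.
s3write_using(mtcars, FUN = write.csv,
              object = "mtcars.csv", bucket = "my-bucket")

## Download and parse it back using the matching reader.
df <- s3read_using(FUN = read.csv,
                   object = "mtcars.csv", bucket = "my-bucket")
```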
# aws.s3 0.2.1
* `s3HTTP()` gains a `url_style` argument to control use of "path"-style (new default) versus "virtual"-style URL paths. (#23, #118)
# aws.s3 0.2.0
* All functions now produce errors when requests fail rather than returning an object of class "aws_error". (#86)
# aws.s3 0.1.39
* `s3save()` gains an `envir` argument. (#115)
# aws.s3 0.1.38
* `get_bucket()` now automatically handles pagination based upon the specified number of objects to return. (PR #104, h/t Thierry Onkelinx)
* `get_bucket_df()` now uses an available (but unexported) `as.data.frame.s3_bucket()` method. The resulting data frame always returns character rather than factor columns.
# aws.s3 0.1.37
* Further changes to region verification in `s3HTTP()`. (#46, #106; h/t John Ramey)
# aws.s3 0.1.36
* `bucketlist()` now returns (in addition to past behavior of printing) a data frame of buckets.
* New function `get_bucket_df()` returns a data frame of bucket contents. `get_bucket()` continues to return a list. (#102, h/t Dean Attali)
# aws.s3 0.1.35
* `s3HTTP()` gains a `check_region` argument (default is `TRUE`). If `TRUE`, attempts are made to verify the bucket's region before performing the operation in order to avoid confusing out-of-region errors. (#46)
* Object keys can now be expressed using "S3URI" syntax, e.g., `object = "s3://bucket_name/object_key"`. In all cases, the bucket name and object key will be extracted from this string (meaning that a bucket does not need to be explicitly specified). (#100; h/t John Ramey)
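The S3URI syntax can be illustrated as follows; the bucket and key are placeholders, and both calls assume configured credentials. The two forms are equivalent:

```r
library("aws.s3")

## Bucket and key parsed out of a single "s3://" URI...
obj1 <- get_object("s3://my-bucket/path/to/key.csv")

## ...is equivalent to specifying them separately.
obj2 <- get_object(object = "path/to/key.csv", bucket = "my-bucket")
```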
* Fixed several places where query arguments were incorrectly being passed to the API as object key names, producing errors.
# aws.s3 0.1.34
* Update and rename policy-related functions.
# aws.s3 0.1.33
* Exported the `get_bucket()` S3 generic and methods.
# aws.s3 0.1.32
* Fixed a bug related to the handling of object keys that contained spaces. (#84, #85; h/t Bao Nguyen)
# aws.s3 0.1.29
* Fixed a bug related to the handling of object keys that contained atypical characters (e.g., `=`). (#64)
* Added a new function `s3save_image()` to save an entire workspace.
* Added a temporary fix for GitHub installation using the DESCRIPTION `Remotes` field.
# aws.s3 0.1.25
* Added function `s3source()` as a convenience function to source an R script directly from S3. (#54)
# aws.s3 0.1.23
* Added support for S3 "Acceleration" endpoints, enabling faster cross-region file transfers. (#52)
* `s3save()`, `s3load()`, `s3saveRDS()`, and `s3readRDS()` no longer write to disk, improving performance. (#51)
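The in-memory save/load round trip mentioned above can be sketched like this. Bucket and object names are placeholders, and the calls assume configured credentials.

```r
library("aws.s3")

## Save one or more R objects directly to S3 (no temp file on disk).
x <- 1:10
s3save(x, object = "workspace.Rdata", bucket = "my-bucket")

## Later (or elsewhere), restore the saved objects into the session.
rm(x)
s3load(object = "workspace.Rdata", bucket = "my-bucket")
```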
# aws.s3 0.1.22
* Added new functions `s3saveRDS()` and `s3readRDS()`. (h/t Steven Akins, #50)
# aws.s3 0.1.21
* Operations on non-default buckets (outside "us-east-1") now infer bucket region from bucket object. Some internals were simplified to better handle this. (h/t Tyler Hunt, #47)
# aws.s3 0.1.18
* All functions now use snake case (e.g., `get_object()`). Previously available functions that did not conform to this format have been deprecated. They continue to work, but issue a warning. (#28)
* Separated authenticated and unauthenticated testthat tests, conditional on presence of AWS keys.
* Numerous documentation fixes and consolidations.
* Dropped XML dependency in favor of xml2. (#40)
# aws.s3 0.1.17
* The structure of an object of class "s3_bucket" has changed. It now is simply a list of objects of class "s3_object" and bucket attributes are stored as attributes to the list.
* The order of `bucket` and `object` names was swapped in most object-related functions and the Bucket name has been added to the object lists returned by `getbucket()`. This means that `bucket` can be omitted when `object` is an object of class "s3_object".
# aws.s3 0.1.1