
BLOBs in the Cloud with APEX and AWS S3

Overview

Recently, I was working with one of our customers and ran into a rather unique requirement and an uncommon constraint.  The customer - Storm Petrel - has designed a grant management system called Tempest.  This system is designed to aid local municipalities when applying for FEMA grants after a natural disaster occurs.  As one can imagine, there is a lot of old-fashioned paperwork involved in managing such a process.

Thus, the requirement called for the ability to upload and store scanned documents.  No OCR or anything like that, but rather invoices and receipts so that a paper trail of the work done and associated billing activity can be preserved.  For APEX, this can be achieved without breaking a sweat, as the declarative BLOB feature can easily upload a file and store it in a BLOB column of a table, complete with filename and MIME type.
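
For reference, a table along these lines is all the declarative BLOB feature needs.  The table and column names below are hypothetical - just a sketch - and the filename and MIME type columns are the ones you would map in the file browse item's settings.

-- Hypothetical document table for APEX declarative BLOB storage.
-- Map FILENAME and MIME_TYPE to the file browse item's
-- "Filename Column" and "MIME Type Column" attributes.
CREATE TABLE scanned_documents
(
  doc_id       NUMBER PRIMARY KEY,
  filename     VARCHAR2(400),
  mime_type    VARCHAR2(255),
  uploaded_on  DATE DEFAULT SYSDATE,
  doc_content  BLOB
);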

However, the tablespace storage costs from the hosting company for the anticipated volume of documents was considerable.  So much so that the cost would have to be factored into the price of the solution for each customer, making it more expensive and obviously less attractive.

My initial thought was to use Amazon’s S3 storage solution, since the cost of storing 1GB of data for a month is literally 3 cents.  Data transfer prices are also ridiculously inexpensive, and from what I have seen via marketing e-mails, the prices of this and many of Amazon’s other AWS services have been on a downward trend for some time.

The next challenge was to figure out how to get APEX integrated with S3.  I have seen some of the AWS API documentation, and while there are ample examples for Java, .NET and PHP, there is nothing at all for PL/SQL.  Fortunately, someone else has already done the heavy lifting here: Morten Braten & Jeffrey Kemp.

Morten’s Alexandria PL/SQL Library is an amazing open-source suite of PL/SQL utilities which provide a number of different services, such as document generation, data integration and security.  Jeff Kemp has a presentation on SlideShare that best covers the breadth of this utility.  You can also read about the latest release - 1.7 - on Morten’s blog here.  You owe it to yourself to check out this library whether or not you have any interest in AWS S3!

In this latest release of the library, Jeff Kemp has added a number of enhancements to the S3 integration piece of the framework, making it quite capable of managing files on S3 via a set of easy-to-use PL/SQL APIs.  These APIs can be easily and securely integrated into APEX and called from there.  He even created a brief presentation that describes the S3 APIs.

Configuring AWS Users and Groups

So let’s get down to it.  How does all of this work with APEX?  First of all, you will need to create an AWS account.  You can do this by navigating to http://aws.amazon.com/ and clicking on Sign Up.  The wizard will guide you through the account creation process and collect any relevant information that it needs.  Please note that you will need to provide a valid credit card in order to create an AWS account, as charges may apply depending on which services you choose to use.

Once the AWS account is created, the first thing that you should consider doing is creating a new user that will be used to manage the S3 service.  The credentials that you use when logging into AWS are similar to root, as you will be able to access and manage any of the many AWS services.  When deploying only S3, it’s best to create a user that can only do just that.

To create a new user:

1) Click on the Users tab

2) Click Create New User

3) Enter the User Name(s) and click Create.  Be sure that Generate an access key for each User is checked.

Once you click Create, another popup region will be displayed.  Do not close this window!  Rather, click on Show User Security Credentials to display the Access Key ID and Secret Access Key ID.  Think of the Access Key ID as a username and the Secret Access Key ID as a password, and then treat them as such.

For ease of use, you may want to click Download Credentials and save your keys to your PC.

The next step is to create a Group that your new user will be associated with.  The Group in AWS is used to map a user or users to a set of permissions.  In this case, we will need to allow our user to have full access to S3, so we will have to ensure that the permissions allow for this.  In your environment, you may not want to grant as many privileges to a single user.

To create a new group:

1) Click on the Groups tab

2) Click on Create New Group

3) Enter the Group Name, such as S3-Admin, and click Continue

The next few steps may vary depending on which privileges you want to assign to this group.  The example will assume that all S3 privileges are to be assigned.

4) Select Policy Generator, and then click on the Select button.

5) Set the AWS Service drop down to Amazon S3.

6) Select All Actions (*) for the Actions drop down.

7) Enter arn:aws:s3:::* for the Amazon Resource Name (ARN) and click Add Statement.  This will allow access to any S3 resource.  Alternatively, to create a more restricted group, a bucket name could have been specified here, limiting the users in this group to only be able to manage that specific bucket.

8) Click Continue.

9) Optionally rename the Policy Name to something a little less cryptic and click Continue.

10) Click Create group to create the group.
The animation below illustrates the previous steps:

Next, we’ll add our user to the newly created group.

1) Select the group that was just created by checking the associated checkbox.

2) Under the Users tab, click Add Users to Group.

3) Select the user that you want to add and then click Add Users.

The user should now be associated with the group.

Select the Permissions tab to verify that the appropriate policy is associated with the user.

At this point, the user management portion of AWS is complete.

Configuring AWS S3

The next step is to configure the S3 portion. To do this, navigate to the S3 Dashboard:

1) Click on the Services tab at the top of the page.

2) Select S3.

You should see the S3 dashboard now:

[Screenshot: the S3 dashboard]
S3 uses “buckets” to organize files.  A bucket is just another word for a folder.  Each of these buckets has a number of different properties that can be configured, making the storage and security options quite extensible.  While there is a limit of 100 buckets per AWS account, buckets can contain folders, and when using the AWS APIs, it’s fairly easy to provide a layer of security based on a file’s location within a bucket.
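
As a quick illustration of that last point, once the Alexandria S3 package (covered below) is installed, a folder inside a bucket is addressed simply by prefixing the key.  The query below is only a sketch: it assumes your version of AMAZON_AWS_S3_PKG exposes a p_prefix parameter on get_object_tab (check the package specification), and your-bucket-name is a placeholder for your own bucket.

-- List only the objects under the "invoices/" folder of the bucket.
-- Filtering by prefix like this is one way to layer folder-level
-- security on top of a single bucket.
SELECT key, size_bytes, last_modified
  FROM table (amazon_aws_s3_pkg.get_object_tab
                (p_bucket_name => 'your-bucket-name',
                 p_prefix      => 'invoices/'));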

Let’s start out by creating a bucket and setting up some of the options.

1) Click on Create Bucket.

2) Enter a Bucket Name and select the Region closest to your location and click Create.  One thing to note - the Bucket Name must be unique across ALL of AWS.  So don’t even try demo, test or anything like that.

3) Once your bucket is created, click on the Properties button.

I’m not going to go through all of the properties of a bucket in detail, as there are plenty of other places that already have that covered.  Fortunately, for our purposes, the default settings on the bucket should suffice.  It is worth taking a look at these settings, as many of them - such as Lifecycle and Versioning - can definitely come in handy and reduce your development and storage costs.

Next, let’s add our first file to the bucket.  To do this:

1) Click on the Bucket Name.

2) Click on the Upload button.

3) A dialog box will appear.  To add a file or files, click Add Files.

4) Using the File Upload window, select a file that you wish to upload.  Select it and click Open.

5) Click Start Upload to initiate the upload process.
Depending on your file size, the transfer will take anywhere from a second to several minutes.  Once it’s complete, your file should be visible on the left side of the dashboard.

6) Click on the recently uploaded file.

7) Click on the Properties button.

Notice that there is a link to the file displayed in the Properties window.  Click on that link.  You’re probably looking at something like this now:

[Screenshot: S3 returns an Access Denied error when the link is opened directly]
That is because by default, all files uploaded to S3 will be secured.  You will need to call an AWS API to generate a special link in order to access them.  This is important for a couple of reasons.  First off, you clearly don’t want just anyone accessing your files on S3.  Second, even if securing files is not a major concern, keep in mind that S3 also charges for data transfer.  Thus, if you put a large public file on S3, and word gets out as to its location, charges can quickly add up as many people access that file.  Fortunately, securely accessing files on S3 from APEX is a breeze with the Alexandria PL/SQL libraries.  More on that shortly.

If you want to preview any file in S3, simply right-click on it and select Open or Download.  This is also how you rename and delete files in S3.  And only authorized AWS S3 users will be able to perform these tasks, as the S3 Dashboard requires a valid AWS account.

Installing AWS S3 PL/SQL Libraries

The next thing that needs to be installed is the Alexandria PL/SQL library - or rather just the Amazon S3 portion of it.  There is no need to install the rest of the components, especially if they are not going to be used.  For ease of use, these objects can be installed directly into your APEX parse-as schema.  
However, they can also be installed into a centralized schema and then made available to other schemas that need to use them.
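
If you opt for the centralized-schema route, a sketch of the required grants and synonyms looks something like the following.  The schema names S3_UTILS and APEX_S3 are hypothetical placeholders for your own, and the object list should match whatever you actually install.

-- Connected as the central schema that owns the library objects:
GRANT EXECUTE ON t_str_array         TO apex_s3;
GRANT EXECUTE ON debug_pkg           TO apex_s3;
GRANT EXECUTE ON http_util_pkg       TO apex_s3;
GRANT EXECUTE ON amazon_aws_auth_pkg TO apex_s3;
GRANT EXECUTE ON amazon_aws_s3_pkg   TO apex_s3;

-- Connected as the APEX parse-as schema (assumed here to be APEX_S3),
-- create synonyms so the packages can be referenced without a prefix:
CREATE SYNONYM amazon_aws_auth_pkg FOR s3_utils.amazon_aws_auth_pkg;
CREATE SYNONYM amazon_aws_s3_pkg   FOR s3_utils.amazon_aws_s3_pkg;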

There are eight files that need to be installed, as well as a one-off command.

1) First, connect to your APEX parse-as schema and run the following script:
create type t_str_array as table of varchar2(4000)
/
2) Next, the HTTP_UTIL and DEBUG packages need to be installed.  These allow the database to retrieve files from S3 and provide a debugging infrastructure.

To install these packages, run the following four scripts as your APEX parse-as schema:
/plsql-utils-v170/ora/http_util_pkg.pks
/plsql-utils-v170/ora/http_util_pkg.pkb
/plsql-utils-v170/ora/debug_pkg.pks
/plsql-utils-v170/ora/debug_pkg.pkb
Before running the S3 scripts, the package body of AMAZON_AWS_AUTH_PKG needs to be modified so that your AWS credentials are embedded in it.

3) Edit the file amazon_aws_auth_pkg.pkb in a text editor.

4) Near the top of the file are three global variable declarations: g_aws_id, g_aws_key and g_gmt_offset.  Set the values of these three variables to the Access Key ID, Secret Key ID and GMT offset.  These values were displayed and/or downloaded when you created your AWS user.  If you did not record these, you will have to create a new pair back in the User Management dashboard.
Here’s an example of what the changes will look like, with the values obfuscated:
g_aws_id     varchar2(20) := 'XXXXXXXXXXXXXXXXXXX'; -- AWS Access Key ID
g_aws_key    varchar2(40) := 'XXXXXXXXXXXXXXXXXXX'; -- AWS Secret Key
g_gmt_offset number       := 4; -- your timezone GMT adjustment (EST = 4, CST = 5, MST = 6, PST = 7) 
It is also possible to store these values in a more secure place, such as an encrypted column in a table and then fetch them as they are needed.
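
Here’s a minimal sketch of that approach.  It assumes a hypothetical AWS_CREDENTIALS table holding a single row, and it relies on the init procedure that ships with AMAZON_AWS_AUTH_PKG - check your version’s package specification for the exact signature.  How you encrypt the columns (DBMS_CRYPTO, TDE, etc.) is up to you.

-- Hypothetical table holding the keys (ideally in encrypted columns).
CREATE TABLE aws_credentials
(
  aws_id     VARCHAR2(20),
  aws_key    VARCHAR2(40),
  gmt_offset NUMBER
);

-- Fetch the credentials at runtime and hand them to the auth package
-- before calling any of the S3 APIs.
DECLARE
  l_cred aws_credentials%ROWTYPE;
BEGIN
  SELECT * INTO l_cred FROM aws_credentials;  -- assumes a single row

  amazon_aws_auth_pkg.init
  (
    p_aws_id     => l_cred.aws_id,
    p_aws_key    => l_cred.aws_key,
    p_gmt_offset => l_cred.gmt_offset
  );
END;
/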

5) Once the changes to amazon_aws_auth_pkg.pkb are made, save the file.

6) Next, run the following four SQL scripts in the order below as your APEX parse-as schema:
/plsql-utils-v170/ora/amazon_aws_auth_pkg.pks
/plsql-utils-v170/ora/amazon_aws_auth_pkg.pkb
/plsql-utils-v170/ora/amazon_aws_s3_pkg.pks
/plsql-utils-v170/ora/amazon_aws_s3_pkg.pkb
If there are no errors, then we’re almost ready to test and see if we can view S3 via APEX!

IMPORTANT NOTE: The S3 packages in their current form do not offer support for SSL.  This is a big deal, since any request made to S3 will be sent in the clear, putting the contents of your files at risk as they are transferred to and from S3.  There is a proposal on the Alexandria Issues Page that details this deficiency.

I have made some minor alterations to the AMAZON_AWS_S3_PKG package which accommodate using SSL and Oracle Wallet when calling S3. You can download it from here.  When using this version, there are three additional package variables that need to be altered:
 
g_orcl_wallet_path constant varchar2(255) := 'file:/path_to_dir_with_oracle_wallet';
g_orcl_wallet_pw   constant varchar2(255) := 'Oracle Wallet Password';
g_aws_url_http     constant varchar2(255) := 'https://'; -- Set to either http:// or https:// 
Additionally, Oracle Wallet will need to be configured with the certificate from s3.amazonaws.com.  Jeff Hunter has an excellent and easy-to-follow post here that will guide you through configuring Oracle Wallet.  
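
If you want to sanity-check the wallet before wiring it into the S3 package, a quick UTL_HTTP call like the sketch below can help.  The wallet path and password are placeholders for your own values, and note that the ACL described in the next section must already allow your schema to reach *.amazonaws.com.

-- Quick check that the wallet trusts the s3.amazonaws.com certificate chain.
-- An ORA-29024 (certificate validation failure) means the wallet is missing
-- a certificate; any HTTP response at all (even 403) means SSL is working.
DECLARE
  l_req  utl_http.req;
  l_resp utl_http.resp;
BEGIN
  utl_http.set_wallet('file:/path_to_dir_with_oracle_wallet', 'Oracle Wallet Password');
  l_req  := utl_http.begin_request('https://s3.amazonaws.com/');
  l_resp := utl_http.get_response(l_req);
  dbms_output.put_line('HTTP status: ' || l_resp.status_code);
  utl_http.end_response(l_resp);
END;
/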

Configuring the ACL

Starting with Oracle 11g, an Access Control List - or ACL - restricts which outbound network connections the database is allowed to make.  By default, none are permitted.  Thus, we will need to configure the ACL to allow our schema to access Amazon’s servers.
 
To create the ACL, run the following script as SYS, replacing the values with your own:
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL
  (
    acl         => 'apex-s3.xml',
    description => 'ACL for APEX-S3 to access Amazon S3',
    principal   => 'APEX_S3',
    is_grant    => TRUE,
    privilege   => 'connect'
  );
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL
  (
    acl         => 'apex-s3.xml',
    host        => '*.amazonaws.com',
    lower_port  => 80,
    upper_port  => 80
  );
  COMMIT;
END;
/
IMPORTANT NOTE: When using SSL, the lower_port and upper_port values should both be set to 443.
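
Once the script has run, the ACL can be verified with a couple of data dictionary queries, run as a privileged user.  These views exist in 11g and later; adjust the principal if your parse-as schema is not APEX_S3.

-- Confirm the ACL is assigned to the Amazon hosts and port range:
SELECT host, lower_port, upper_port, acl
  FROM dba_network_acls
 WHERE host = '*.amazonaws.com';

-- Confirm the parse-as schema has the connect privilege on the ACL:
SELECT acl, principal, privilege, is_grant
  FROM dba_network_acl_privileges
 WHERE principal = 'APEX_S3';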

Integration with APEX

At this point, the integration between Oracle and S3 should be complete.  We can run a simple test to verify this.  
From the SQL Workshop, enter and execute the following SQL, replacing apex-s3-integration with the name of your bucket: 
SELECT * FROM table (amazon_aws_s3_pkg.get_object_tab(p_bucket_name => 'apex-s3-integration'))

This query will return all files that are stored in the bucket apex-s3-integration, as shown below:

[Screenshot: query results listing the uploaded file]

If you see the file that you previously uploaded, then everything is working as it should!
 
Now that the configuration is complete, the next step is to build the actual integration into APEX so that we can upload and download BLOBs to/from S3.  In this example, we will build a simple APEX form & report that allows the user to upload, download and delete content from an S3 bucket.  This example uses APEX 4.2.5, but it should work in previous releases, as it uses nothing too version specific.

To create a simple APEX & S3 Integration application:
 
1) Create a new Database application.  Be sure to set Create Options to Include Application Home Page, and select any theme that you wish.  This example will use Theme 25.
 
2) Edit Page 1 of your application and create a new Interactive Report called “S3 Files”.  Use the following SQL as the source of the report, replacing apex-s3-integration with your bucket name:
SELECT key,
       size_bytes,
       last_modified,
       amazon_aws_s3_pkg.get_download_url
       (
         p_bucket_name => 'apex-s3-integration',
         p_key         => key,
         p_expiry_date => SYSDATE + 1
       ) download,
       key delete_doc
  FROM table (amazon_aws_s3_pkg.get_object_tab(p_bucket_name => 'apex-s3-integration'))
The SQL calls the pipelined function get_object_tab from the S3 package and returns all files in the corresponding bucket.  Since the AWS credentials are embedded in the S3 package, they do not need to be entered here.
 
Since the bucket that we created is not open to the public, all access to it is restricted.  Only when a specific Access Key ID, Signature and Expiration Date are passed in will the file be accessible.  To do this, we need to call another API to generate a valid link with that information included.  The get_download_url API takes in three parameters - bucket_name, key and expiry date.  The first two are self-explanatory, whereas the third will determine how long the specific download link will be good for.  In our example, it is set to 1 day, but it can be set to any duration of time.  
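
Outside of the report, the same API can be exercised from SQL Workshop or SQL*Plus.  The snippet below is a sketch that prints a link valid for one hour; the file name invoice.pdf is hypothetical, so substitute your own bucket and key.

-- Generate a pre-signed download URL that expires one hour from now.
DECLARE
  l_url varchar2(4000);
BEGIN
  l_url := amazon_aws_s3_pkg.get_download_url
  (
    p_bucket_name => 'apex-s3-integration',
    p_key         => 'invoice.pdf',      -- hypothetical file name
    p_expiry_date => SYSDATE + (1/24)    -- one hour from now
  );
  dbms_output.put_line(l_url);
END;
/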
 
Once the report is created, run your application and log in.  The first page will display something similar to this:
 
[Screenshot: the S3 Files Interactive Report, with generated Download URLs]
Upon closer inspection of the Download link, the AWSAccessKeyId, Expires and Signature parameters can be seen.  These were automatically generated from the S3 API, and this link can be used to download the corresponding file.  
 
3) Edit the KEY column of the Interactive Report.  Enter #KEY# for the Link Text, set the Target to URL and enter #DOWNLOAD# for the URL and click Apply Changes.
 
Create a Download Link on the KEY Column
 
Next, since we no longer need to display the Download column in the report, it should be set to hidden.
 
4) Edit the Report Attributes of the Interactive Report.
 
5) Set Display Text As for the Download column to Hidden and click Apply Changes.
 
Set the Download column to Hidden
 
When running Page 1 now, the name of the file should be a link.  Clicking on the link should either display or download the file from S3, depending on what type of file was uploaded.
 
To delete an item, we will have to call the S3 API delete_object and pass in the corresponding bucket name and key of the file to be deleted.  We can handle this easily via a column link that calls a Dynamic Action.
 
Before we get started, we’ll need to create a hidden item on Page 1 that will store the file key.
 
6) Create a new Hidden Item on Page 1 called P1_KEY.
 
7) Make sure that the Value Protected attribute is set to Yes, take all the defaults on the next page, and create the item.
 
Next, let’s edit the DELETE_DOC column and set the link that will trigger a Dynamic Action which will in turn, delete the document from S3.
 
8) Edit the DELETE_DOC column.
 
9) Enter Delete for the Link Text and enter the following for the Link Attributes: id="#KEY#" class="deleteDoc"  Next, set the Target to URL, enter # in the URL field and click Apply Changes.
 
Set the Link Attributes of the DELETE_DOC Column
 
Next, let’s create the Dynamic Action.
 
10) Create a new Dynamic Action called Delete S3 Doc.  It should be set to fire when the Event is a Click, and when the jQuery Selector is equal to .deleteDoc
 
Definition of the Delete S3 Doc Dynamic Action
 
11) Next, the first True action should be set to Confirm, and the message “Are you sure that you want to delete this file?” entered into the Text region.
 
Add the Confirm True Action
 
12) Click Next and then click Create Dynamic Action.
 
At this point, if you click on the Delete link, a confirmation message should be displayed.  Clicking OK will have no impact, as additional True actions need to be first added to the Dynamic Action.  Let’s create those True actions now.
 
13) Expand the Delete S3 Doc Dynamic Action and right-click on True and select Create.
 
14) Set the Action to Set Value, uncheck Fire on Page Load, and enter the following for JavaScript Expression: this.triggeringElement.id  Next, set the Selection Type to Item(s) and enter P1_KEY for the Item(s) and click Create.
 
Create the Set Value True Action
 
15) Create another True action by clicking on Add True Action.
 
16) Set the Action to Execute PL/SQL Code and enter the following for PL/SQL Code, replacing apex-s3-integration with the name of your bucket:
amazon_aws_s3_pkg.delete_object 
  (
  p_bucket_name => 'apex-s3-integration',
  p_key => :P1_KEY
  );

Enter P1_KEY for Page Items to Submit and click Create.

Create Execute PL/SQL Code True Action

 
17) Create another True action by clicking on Add True Action.
 
18) Set the Action to Refresh, set the Selection Type to Region, and set the Region to S3 Files (20) and click Create.
 
Create a Refresh True Action
 
At this point, if you click on the Delete link for a file in S3, you should be prompted to confirm the deletion, and if you click OK, the file will be removed from S3 and the Interactive Report refreshed.  A quick check of your AWS S3 Dashboard should show that the file is, in fact, deleted.  
 
The last step is to create a page that allows the user to upload files to S3.  This can easily be done with a File Upload APEX item and a simple call to the S3 API new_object.
 
19) Create a new blank page called Upload S3 File and set the Page Number to 2.  Re-use the existing tab set and tab and optionally add a breadcrumb to the page and set the parent to the Home page.
 
20) On Page 2, create a new HTML region called Upload S3 File.
 
Next, we’ll add a File Browse item to the page.  This item will use APEX’s internal table WWV_FLOW_FILES to temporarily store the BLOB file before uploading it to S3.
 
21) In the new region, create a File Browse item called P2_DOC and click Next.
 
22) Set the Label to Select File to Upload and the Template to Required (Horizontal - Right Aligned) and click Next.
 
23) Set Value Required to Yes and Storage Type to Table WWV_FLOW_FILES and click Next.
 
24) Click Create Item.
 
Next, we’ll add a button that will submit the page.
 
25) Create a Region Button in the Upload S3 File region.
 
26) Set the Button Name to UPLOAD, set the Label to Upload, set the Button Style to Template Based Button and set the Button Template to Button and click Next.
 
27) Set the Position to Region Template Position #CREATE# and click Create Button.
 
Next, the PL/SQL Process that will call the S3 API needs to be created.
 
28) Create a new Page Process in the Page Processing region.
 
29) Select PL/SQL and click Next.
 
30) Enter Upload File to S3 for the Name and click Next.
 
31) Enter the following for the region Enter PL/SQL Page Process, replacing apex-s3-integration with the name of your bucket and click Next.
FOR x IN (SELECT * FROM wwv_flow_files WHERE name = :P2_DOC)
LOOP
  -- Create the file in S3
  amazon_aws_s3_pkg.new_object
  (
    p_bucket_name  => 'apex-s3-integration',
    p_key          => x.filename,
    p_object       => x.blob_content,
    p_content_type => x.mime_type
  );
END LOOP;
-- Remove the doc from WWV_FLOW_FILES
DELETE FROM wwv_flow_files WHERE name = :P2_DOC;
The PL/SQL above will loop through the table WWV_FLOW_FILES for the document just uploaded and pass the filename, MIME type and file itself to the new_object S3 API, which in turn will upload the file to S3.  The last line will ensure that the file is immediately removed from the WWV_FLOW_FILES table.
 
32) Enter File Uploaded to S3 for the Success Message and click Next.
 
33) Set When Button Pressed to UPLOAD (Upload) and click Create Process.
 
One more thing needs to be added - a branch that returns to Page 1 after the file is uploaded.
 
34) In the Page Processing region, expand the After Processing node and right-click on Branches and select Create.
 
35) Enter Branch to Page 1 for the Name and click Next.
 
36) Enter 1 for Page, check Include Process Success Message and click Next.
 
37) Set When Button Pressed to UPLOAD (Upload) and click Create Branch.
 
Now, you should be able to upload any file to your S3 bucket from APEX!
 
One more small addition needs to be made to our example application.  We need a way to get to Page 2 from Page 1.  A simple region button should do the trick.
 
38) Edit Page 1 of your application.
 
39) Create a Region Button in the S3 Files region.
 
40) Set the Button Name to UPLOAD, set the Label to Upload, set the Button Style to Template Based Button and set the Button Template to Button and click Next.
 
41) Set the Position to Right of the Interactive Report Search Bar and click Next.
 
42) Set the Action to Redirect to Page in this Application, enter 2 for Page and click Create Button.
 
That’s it!  You should now have a working APEX application that has the ability to upload, download and delete files from Amazon’s S3 service.
 
Finished Product
 
Please leave any questions or report any typos in the comments, and I’ll get to them as soon as time permits.

Comments

trent said…
Nice Guide Scott.

Just a quick comment - in your overview, you mentioned to store 1TB of data it is 3 cents/month. I understood this to be 3 cents per GB up to the first TB.

Some adjustments I made to the package(s).

- Rather than using the g_gmt_offset variable, I adjusted the function (get_date_string) that returns the current date to return it in UTC format without the dependency of g_gmt_offset being accurate: to_char(sys_extract_utc(systimestamp), 'Dy, DD Mon YYYY HH24:MI:SS', 'NLS_DATE_LANGUAGE = AMERICAN') || ' GMT'.

- Enabling server side encryption. In the assignment of l_auth_string in the new_object procedure, pass in the header: x-amz-server-side-encryption: AES256 and extend l_header_names and l_header_values arrays to pass in x-amz-server-side-encryption and AES256 respectively.

- Enabling reduced redundancy storage - for non prod systems it's probably not necessary to use the standard storage class. In the assignment of l_auth_string in the new_object procedure, pass in the header: x-amz-storage-class: STANDARD or REDUCED_REDUNDANCY and extend l_header_names and l_header values arrays to pass in x-amz-storage-class and STANDARD or REDUCED_REDUNDANCY respectively.
Scott said…
Trent,

Thanks for your feedback; I've updated the post to reflect the accurate pricing information.

Your other suggestions touch on some additional capabilities of S3; encryption and redundancy. Both of these should be considered for sensitive production data, as they offer additional protection, especially when used together.

Thanks,

- Scott -
Hi,
I set this up and it's working fine, with one problem.
I've created a folder in the bucket using Greek characters and when retrieving the list those characters are garbage.

I'm using Oracle XE 10g with NLS_CHARACTERSET = AL32UTF8. Any ideas how this could work?
Brad Peek said…
I'm going to give this a try, and really appreciate the article. You mention that in order to create a wallet with the correct certificates I would need to get them from s3.amazonaws.com. Has this changed since you originally wrote the article? I get a redirect to https://aws.amazon.com/s3/ when I try it and I'm not sure if that is correct (don't want to assume).
Scott said…
Brad,

Use this URL: https://s3.amazonaws.com/bucket_name/foo

Even though the access is blocked, you should be able to get the certificate from that page.

Thanks,

- Scott -
Brad Peek said…
Thanks. That probably would have worked. What I did was "open" a text file from the S3 console and exported the certs from the resulting URL. It appears to have worked, although I won't know for sure until I get further along.
Scott said…
Same difference; anything to load a page on that domain will do it.

- Scott -
Brad Peek said…
Scott -- I ran into "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256", which is discussed here: https://github.com/mortenbra/alexandria-plsql-utils/issues/24?_pjax=%23js-repo-pjax-container

My APEX instance is hosted at Enciva.com and is currently Oracle 11. The GitHub issue comments mentions that the Oracle 12c dbms_crypto package has added HMAC_SH256 support. I can ask Enciva to upgrade my instance to Oracle 12c, but was wondering what else I would need to do to make this work? Not sure I have the skills or patience for a major change to the packages that Morten and you have so generously provided.

Did you also run into this issue? Find a solution?
Scott said…
Brad,

Upgrading to 12c is likely the best approach. There is a patch that will add the new cyphers to 11g as well, but it took quite a while for us to get that to work on another instance. You are welcome to pursue that path as well, but you'll have to work with Oracle Support to get the patch applied.

- Scott -
Brad Peek said…
To get around the AWS4-HMAC-SHA256 issue, I created an AWS bucket in the US East (N. Virginia) region, which I think should support the older cypher that your package uses. I get to the point of running "SELECT * FROM table (amazon_aws_s3_pkg.get_object_tab(p_bucket_name => 'my-bucket-name'));" but zero rows are returned. I don't get any error messages, which I think means I am authenticating and also specifying the correct bucket name. For example, if I put in a bogus bucket name I get an error message to that effect. Any idea why that select would not find the documents in that bucket? I tried making the files "public" just to see if it was a permissions issue and have also tried a couple of different IAM users with no luck.

I installed the AWS CLI onto my laptop and can list the document using the same IAM user and bucket name. I would LOVE to get this working, so any help would be appreciated.
Brad Peek said…
This is regarding an earlier (unpublished) post where I asked for help with troubleshooting the issue of not getting rows returned from the sample select that you provided. I found the problem was because of an edit I had made to your AWS S3 package that allows support of HTTPS. I had thought one of the constant values was specific to your environment and I changed it. Wrong. So, thanks again for the article and I will now take the next step of trying to get it to work with APEX.
